Voice-activated text selection supported by an AI agent on Android by mid-2026

A state-of-the-art mainstream Android phone will have a feature, either built in or available for up to roughly $50 USD a month, where I can use voice to get an LLM agent to select the right text for me. For example, I can tell it to select an address within a Google Doc or a browser page I'm viewing, rather than fighting with whatever UI the page has.

It has to be released and usable by me, and it has to be able to select an address on a web page that I'm looking at, using only voice after hitting a button or similar to trigger the agent. Example use cases: in Google Maps, you see an address on the screen and just say, "Hey, copy the address," so you can paste it into another app; or ChatGPT gives you an address but its text selection is poor, so you just say, "Copy the address I'm looking at." The instruction complexity does not have to be high, and it doesn't have to be a fully intelligent LLM interface. For example, you don't need to be able to say things like "copy the address and then reverse it."

Extent:

  • At least 50% of the following major text-heavy apps have to support it: ChatGPT, Claude, YouTube, Wikipedia, Google Docs, Google Maps, Firefox, Google Calendar, Discord.

  • It's hard to make the call here because many of those apps don't use any standard, open representation of text like HTML, and a lot of them treat text as a pseudo-private item: they'll let you see it, but if you try to actually select large blocks of it, they aggressively fight you (Discord, PDFs, banking apps, etc.). A rough sketch of how an agent could still read on-screen text at the platform level follows below.
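To make the feasibility point concrete, here is a minimal, purely hypothetical Kotlin sketch of one way such an agent could work on the platform side: an Android AccessibilityService can walk the visible node tree of whatever app is in the foreground and copy an address-looking span to the clipboard, without the host app exposing HTML or cooperating at all. The class name, the missing voice-trigger wiring, and the crude regex (a real agent would presumably hand the screen text to an LLM instead) are all illustrative assumptions, not any shipping product's implementation.

```kotlin
// Hypothetical sketch only: an accessibility service that finds the first
// address-looking piece of visible text and copies it to the clipboard.
// Class name and regex are invented for illustration.

import android.accessibilityservice.AccessibilityService
import android.content.ClipData
import android.content.ClipboardManager
import android.content.Context
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class VoiceSelectService : AccessibilityService() {

    // Very rough US-style street-address pattern; a real agent would use an
    // LLM (or at least a proper address parser) rather than a regex.
    private val addressRegex =
        Regex("""\d+\s+[A-Za-z0-9 .']+,\s*[A-Za-z .]+,\s*[A-Z]{2}\s*\d{5}""")

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // In a real agent this would be driven by a voice command
        // ("copy the address"), not by every accessibility event.
    }

    override fun onInterrupt() {}

    /** Walk the visible node tree and copy the first address-like text found. */
    fun copyFirstAddress() {
        val root = rootInActiveWindow ?: return
        val match = findAddress(root) ?: return
        val clipboard = getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
        clipboard.setPrimaryClip(ClipData.newPlainText("address", match))
    }

    // Depth-first search over the on-screen accessibility nodes.
    private fun findAddress(node: AccessibilityNodeInfo): String? {
        node.text?.let { text ->
            addressRegex.find(text)?.let { return it.value }
        }
        for (i in 0 until node.childCount) {
            val child = node.getChild(i) ?: continue
            findAddress(child)?.let { return it }
        }
        return null
    }
}
```

A service like this still has to be declared in the app's manifest and explicitly enabled by the user in Accessibility settings, which is roughly the "enable or pay for" gate the question describes; it also only sees what the app exposes through the accessibility tree, which is exactly where the pseudo-private apps above could keep fighting back.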

Nevertheless, I think in most cases this will be either a clear yes or a clear no: will muttering short phrases to your LLM agent to do things with what you see on your screen be possible and widespread by June 30th, 2026, or won't it?
