I really like using Siri to get things done when I’m not able to use my phone normally. For example, when cooking I can quickly add things to my shopping list as I’m using them, so I’ll remember to buy more the next time I’m at the supermarket. Or if I’m driving somewhere, I can easily control my music, or reply to a friend to let them know if I’m running late.
When Siri works, it’s brilliant, but there are times it can be incredibly frustrating to use.
On it… still on it…
Every year at WWDC we hear from Apple that Siri can do more and more things entirely on-device without needing the Internet, but in practice it still seems to suffer from connection issues (even when all my other devices are fine). This usually manifests as Siri responding with the phrase:
On it… still on it… something went wrong!
As soon as Siri answers any request with “on it…” I know with 100% certainty that the request is going to fail. Even worse, if I immediately ask Siri to do the same thing again, it will typically succeed! I really wish Siri would just retry the request itself silently, and spare me from hearing that dreaded phrase again.
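For illustration, here’s the behaviour I’m wishing for as a minimal Swift sketch. Everything in it is hypothetical (the `AssistantError` type, the retry policy, the whole idea of wrapping the request), not anything from Apple’s actual Siri internals:

```swift
import Foundation

// A hypothetical silent-retry wrapper: quietly re-attempt a failing
// request a few times before admitting defeat to the user.
enum AssistantError: Error {
    case somethingWentWrong
}

func withSilentRetry<T>(
    maxAttempts: Int = 3,
    delay: TimeInterval = 0.5,
    _ operation: () async throws -> T
) async throws -> T {
    var lastError: Error = AssistantError.somethingWentWrong
    for attempt in 1...maxAttempts {
        do {
            return try await operation()
        } catch {
            lastError = error
            // No "still on it…" — just pause briefly and try again.
            if attempt < maxAttempts {
                try await Task.sleep(nanoseconds: UInt64(delay * 1_000_000_000))
            }
        }
    }
    // Only after every attempt has failed does the user hear about it.
    throw lastError
}
```

Wrapped like that, the transient failure I keep hitting would be invisible, and only a request that failed every attempt would produce the dreaded phrase.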
Split personality
I have a couple of HomePod minis (or is it HomePods mini?), one in the living room and one in the kitchen. When cooking, it’s handy to set various timers, so obviously I ask Siri to do that. But if I then go into the living room and ask Siri to check on the timer, it acts like it has no idea what I’m talking about.
Me: “Siri, how long’s left on the timer?”
Siri: “There are no timers on HomePod.”
Me: *sigh* “Siri, how long’s left on the kitchen timer?”
Siri: “There are no timers on HomePod.”
Me: *SIGH* *walks to kitchen* “Siri, how long’s left on the timer?”
Siri: “There’s a timer with 4 minutes left.”
Me: (╯°□°)╯︵ ┻━┻
I found something on the web
I’ve also had interactions where Siri offers example phrases I can use, only to turn around and say it has no idea what I’m talking about when I try one of them. Or it abandons any attempt at understanding and just does a web search for whatever you asked. That’s rarely helpful, and it’s completely pointless on HomePod, which lacks a display; in that case Siri will chastise you and tell you to “ask again from your iPhone”.
When it comes to memory, Siri will sometimes forget what you were talking about mere seconds earlier, forcing you to repeat your request in full while trying to get the syntax exactly right. It’s like typing into a command line rather than having a conversation.
By comparison, when this does work it feels so much more natural. Asking about the weather, then following up with “and what about tomorrow?”, flows quite nicely. It can also be quite clever: if you ask about “tomorrow” shortly after midnight, it will check whether you actually meant today, which is probably what most people mean at that hour.
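As a rough sketch of what I imagine that rule looks like (entirely my own guess, not Apple’s implementation):

```swift
import Foundation

// A guess at the disambiguation rule: in the small hours, "tomorrow"
// is ambiguous, because the calendar day has already rolled over.
func tomorrowNeedsConfirmation(now: Date = Date(),
                               calendar: Calendar = .current) -> Bool {
    // Between midnight and 4am, check whether the user meant "today".
    // The 4am cut-off is arbitrary; Apple's actual heuristic is unknown.
    return calendar.component(.hour, from: now) < 4
}
```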
SiriGPT?
Can an LLM like ChatGPT help here? I’ve seen a few articles this week claiming that’s exactly what Apple is working on for iOS 18, and I think it would make a big difference. ChatGPT is already far ahead of Siri simply in terms of how natural-sounding its conversations can be; they can feel convincingly human.
I think it would substantially improve the experience if Apple could integrate those conversational abilities into Siri, but they will need to be very careful about the fact that LLMs hallucinate a lot, which is to say they can generate output that sounds plausible but is factually incorrect or totally unrelated to what you asked.
Although Apple hasn’t jumped on the current AI bandwagon yet, they’ve actually been using machine learning (ML) in their products for a while now. They tend to apply it in subtler ways, such as separating a subject from the background, which is what lets portrait-style effects be applied to your photographs, or in real time during video calls. It also powers the Visual Look Up feature that helps you identify people, animals, plants, and more. There are tons of little features like that throughout Apple’s operating systems that rely on ML behind the scenes.
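For a sense of what that subject separation looks like from a developer’s point of view, Apple’s Vision framework exposes it directly. Here’s a minimal sketch using person segmentation (the function name is mine, and error handling is kept to a bare minimum):

```swift
import Vision

// A minimal sketch: ask Vision for a person-segmentation mask, the
// same kind of on-device ML that powers portrait-style effects.
func personMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate // trade speed for mask quality

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    // Each pixel in the returned buffer encodes how likely it is
    // to belong to a person, ready to use as a mask.
    return request.results?.first?.pixelBuffer
}
```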
The good news is that Apple’s privacy focus, and the presence of the Neural Engine in all their recent chips, mean a lot of these ML models can run entirely on-device. I’d expect no less from a next-generation Siri, and for a smart assistant with so much access to your personal data, that can only be a good thing.
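On the Core ML side, steering a model toward the Neural Engine is a one-line configuration choice. A sketch using Core ML’s real configuration API (the `loadOnDeviceModel` function name is mine, and `.cpuAndNeuralEngine` requires iOS 16 or later):

```swift
import CoreML

// A sketch of keeping inference on-device: ask Core ML to prefer
// the CPU and Neural Engine rather than falling back to anything else.
func loadOnDeviceModel(at url: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuAndNeuralEngine
    return try MLModel(contentsOf: url, configuration: configuration)
}
```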