Be Tech Ready!!

Startups are fighting it out to create the iPhone of AI

The race to create the iPhone equivalent in the field of artificial intelligence is getting intense. Rabbit, a tech startup, threw its hat in the ring recently by introducing a compact, orange, walkie-talkie-like gadget. The company claims that this device can employ “AI agents” to perform tasks for the user.

During a prerecorded keynote at the Consumer Electronics Show in Las Vegas, Jesse Lyu, the founder of Rabbit, demonstrated the device’s capabilities. He casually asked the gadget to plan a trip to London, and the keynote showed the device crafting an itinerary and making travel arrangements. Lyu went on to order a pizza, book an Uber, and even give the device a lesson on creating an image using Midjourney.

The Rabbit r1, the latest entry in a growing hardware trend, falls into the category of portable AI-first devices. These gadgets engage with users through natural language, ditching screens and app-based operating systems.

Priced at $199, the r1 competes as a more budget-friendly option against the Humane Ai Pin, a $699 wearable device introduced in November with similar features. It also challenges the $299 Ray-Ban Meta smart glasses, which come equipped with an AI-powered assistant. Noteworthy tech investors are optimistic that recent strides in AI, such as large language models (LLMs), will usher in exciting possibilities for personalized computing.


Rabbit r1 works on a “large action model”

Sam Altman, the CEO of OpenAI, has invested in Humane. There are rumors that Altman and Masayoshi Son of SoftBank are discussing the creation of a distinct AI hardware product in collaboration with Jony Ive, the designer behind the iPhone.

According to three individuals in the know, Ive and Altman aspire to develop a device offering a “more natural and intuitive user experience” for interacting with artificial intelligence. Their inspiration stems from how the touchscreen tech on the first iPhone transformed our interaction with the mobile internet. Son is providing financial support for the initiative and is reportedly advocating for the significant involvement of Arm, the chip design company in which Son’s SoftBank holds a roughly 90 percent stake.

Rabbit, on the other hand, has secured funding of $30 million, primarily led by billionaire Vinod Khosla’s venture capital firm, Khosla Ventures. The prevailing mindset among these billionaires is that whoever can nail the right hardware form factor will strike it big in the era of AI.

According to Lyu’s keynote introducing the device, Rabbit’s r1 operates on a fresh AI system known as a “large action model.” He pointed out a drawback of large language models, the kind used in tools like ChatGPT, stating that they face challenges when it comes to executing actions in the real world.

Rabbit says it respects users’ privacy

In contrast, Rabbit’s large action model is trained using graphical user interfaces found on websites and apps. This allows it to navigate interfaces created for humans and carry out actions on their behalf.

“Things like ChatGPT are extremely good at understanding your intentions, but could be better at taking actions,” Lyu said. “The large language model understands what you say, but the large action model gets things done.”

For r1 to handle tasks such as booking vacations, ordering pizza, and getting an Uber, users have to log in to their different accounts through Rabbit’s web portal. The AI agents, referred to as rabbits by Rabbit, operate on an external server instead of the device, utilizing the logged-in accounts to carry out the required actions.

Rabbit assures that each user gets their own “dedicated and isolated” space on their secure servers, emphasizing that they do not retain user passwords. “Rabbits will ask for permission and clarification during the execution of any tasks, especially those involving sensitive actions such as payments,” the company says on its website.

On its website, the company mentions collaborating with top-notch industry partners in natural language intelligence to grasp user intentions, yet it keeps these partners undisclosed. In its privacy policy, Rabbit reserves the right to share user data with third parties for reasons like “data processing.” Rabbit did not immediately respond to a request for comment.

The necessity of a new gadget for users to engage with AI agents remains uncertain. Francisco Jeronimo, a vice president at the market intelligence firm IDC, expressed skepticism on X, stating, “Only those who have lost touch with the way consumers use tech believe these products can succeed.” His skepticism applies to both Rabbit’s and Humane’s latest offerings. “Although the ideas have merit on their own, the reality is that consumers don’t need these kinds of devices, they need intelligent phones!”


Not all experts are on board with the idea

Altman has openly shared his interest in integrating more agential capabilities into OpenAI’s software, potentially eliminating the necessity for additional AI-first devices. “Eventually, you’ll just ask a computer for what you need, and it will do all of these tasks for you,” Altman said at an OpenAI developer conference in November.

However, the shift towards companies enabling AIs to carry out real-world actions has raised concerns among some experts. While AI devices like the Rabbit r1 have restricted capabilities in influencing the world, the growing strength of agential AIs could present numerous risks, as highlighted in a paper released in October by the Center for AI Safety.

“AI agents can be given goals such as winning games, making profits on the stock market, or driving a car to a destination,” the paper says. “AI agents therefore pose a unique risk: people could build AIs that pursue dangerous goals.”

The paper suggests that a society relying heavily on an intricate network of interacting AI agents might face issues such as getting stuck in feedback loops that are hard to escape, or the goals of agents “drifting” in ways that could be detrimental to humanity.

In November, Altman hinted that safety concerns were a key factor in OpenAI’s cautious approach, emphasizing that they were only taking small steps to grant their AI tools the ability to execute actions in the real world.

“We think it’s especially important to move carefully towards this future of agents,” he said. “It’s going to require … a lot of thoughtful consideration by society.”

Vishal Kawadkar
About author

With over 8 years of experience in tech journalism, Vishal is someone with an innate passion for exploring and delivering fresh takes. Embracing curiosity and innovation, he strives to provide an informed and unique outlook on the ever-evolving world of technology.