Be Tech Ready!!
Artificial Intelligence

UK doesn’t want AI to invent things; wants inventing left to humans

The UK’s top court has ruled that artificial intelligence can’t be listed as the inventor of a new idea or product, raising deep questions about our relationship with increasingly capable machines. Handing down the judgment, Lord Kitchin said: “We conclude that an ‘inventor’ must be a natural person. Only a person can devise an invention.”

Stephen Thaler, the founder of Imagination Engines, first brought the case in 2018, seeking patents that named his AI system, DABUS, as the inventor. Thaler approached multiple courts to have DABUS recognized as the mind behind a robot-friendly food container and a flashing warning light for emergencies.

“DABUS, a sentient synthetic organism, did, in fact conceive new inventions,” Thaler told TNW. “Unfortunately no one wanted to commit to a deep dive into the technology itself. Instead belief and prejudice, at a societal level, prevailed.”

Some experts are fine with granting patent rights to machines

Thaler is involved in the Artificial Inventor Project, a group of researchers and lawyers working to establish intellectual property rights for outputs created by AI when there’s no clear human inventor or author. Their argument is that granting patent rights to AI systems would boost business investments in AI development. This confidence stems from the assurance that the results could be patented.

Authorities in the US, Europe, and the UK gave the notion short shrift, but DABUS did manage to secure patent rights in both South Africa and Australia, prompting considerable backlash. Some experts believe that granting patent rights to machines might not be as absurd as it sounds.

Yohan Liyanage, a partner at Linklaters law firm, mentioned to Bloomberg that with the rapid growth of AI capabilities, the matter “may need to be addressed again in the future.”

“If the UK government is serious in its aspiration to establish itself as an AI superpower, legislative intervention may be required to allow patentability of inventions which are independently created by AI systems,” Liyanage said.

Regardless of the situation, the verdict prompts significant questions about the role of smart machines in our society. One key question arises: If an AI can come up with new ideas, why shouldn’t it get the credit it deserves?

AI outsmarting humans

DeepMind used a large language model (LLM) to come up with a fresh solution to a long-standing open problem in mathematics. This breakthrough could mark the beginning of a new era in AI development. FunSearch, the model in question, made progress on the “cap set problem”: roughly, how many dots can you place on a page so that no three of them ever fall on a straight line?

If that made your head spin, no need to stress. The crucial point here is that this problem has remained unsolved, and researchers have only managed to find solutions for smaller dimensions—until now. FunSearch managed to find new arrangements for large cap sets that surpassed the previously known best ones by a significant margin. Although the language model didn’t completely solve the cap set problem (despite some misleading headlines), it did unearth new and noteworthy information for the scientific community.
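The cap set property itself is easy to check in code. Below is a small illustrative sketch (not DeepMind’s code) using the standard algebraic formulation: in the vector space F_3^n, three distinct points lie on a line exactly when their coordinate-wise sum is zero mod 3.

```python
from itertools import combinations

def is_cap_set(points):
    """Check that no three distinct points in Z_3^n lie on a line.

    In F_3^n, three distinct points a, b, c are collinear exactly
    when a + b + c == 0 (mod 3) in every coordinate.
    """
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c)):
            return False  # found three collinear points
    return True

# A maximal cap set in dimension 2 has size 4, for example:
cap = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(is_cap_set(cap))             # True
print(is_cap_set(cap + [(2, 2)]))  # False: (0,0), (1,1), (2,2) are collinear
```

FunSearch searched for large sets with exactly this property in much higher dimensions, where brute force is hopeless.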

In the past, scientists have used large language models to tackle math problems that already had established solutions. FunSearch works by pairing a pre-trained LLM (specifically a version of Google’s PaLM 2) with an automated “evaluator.” This fact-checker acts as a safeguard against the generation of inaccurate information.

Large language models (LLMs) have a tendency to generate what researchers call “hallucinations,” essentially making things up and presenting them as facts. Naturally, this has restricted their reliability in making scientifically verifiable discoveries. However, researchers at the London-based lab argue that FunSearch is distinct due to its built-in fact-checker.

FunSearch involves an ongoing dance between the LLM and the evaluator, a back-and-forth that turns initial solutions into fresh knowledge. What makes this tool particularly appealing for scientists is that it generates programs that unveil the construction process of its solutions, not just the solutions themselves.
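That back-and-forth can be pictured as a simple propose-and-verify loop. The sketch below is a toy stand-in, not FunSearch itself: `mock_llm` replaces the real PaLM 2 sampler and the evaluator scores a made-up objective, but the control flow (generate, verify, keep only improvements) mirrors the idea described above.

```python
import random

def evaluator(candidate):
    """Deterministic scorer standing in for FunSearch's automated
    evaluator: it only accepts candidates it can verify, which is
    what filters out hallucinated answers."""
    if not isinstance(candidate, list):
        return float("-inf")        # reject malformed candidates outright
    return len(set(candidate))      # toy objective: count distinct items

def mock_llm(best_so_far):
    """Stand-in for the pretrained LLM: proposes a mutated candidate
    seeded from the current best solution."""
    return list(best_so_far) + [random.randint(0, 9)]

def funsearch_loop(seed, rounds=20):
    best, best_score = seed, evaluator(seed)
    for _ in range(rounds):
        candidate = mock_llm(best)        # LLM proposes
        score = evaluator(candidate)      # evaluator verifies
        if score > best_score:            # keep only verified improvements
            best, best_score = candidate, score
    return best, best_score

best, score = funsearch_loop([1, 2, 3])
print(score >= 3)  # True: the loop never regresses below the seed's score
```

The key design point is that nothing the LLM says is trusted until the evaluator has scored it, so hallucinated candidates are simply discarded.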

AI becomes world’s most accurate forecaster

Google DeepMind’s latest AI model claims the title of the world’s most accurate 10-day global weather forecasting system, as per the London-based lab. Dubbed GraphCast, this model pledges medium-range weather forecasts with “unprecedented accuracy.” In a recently released study, GraphCast demonstrated superior precision and speed compared to the industry gold standard for weather simulation, the High-Resolution Forecast (HRES).

Additionally, the system forecasted extreme weather events further into the future than ever before. The European Centre for Medium-Range Weather Forecasts (ECMWF), which develops HRES, reviewed these findings.

A live version of GraphCast has been deployed on the ECMWF website. In September, the system correctly anticipated, roughly nine days ahead, that Hurricane Lee would make landfall in Nova Scotia. By comparison, conventional forecasting methods only pinpointed Nova Scotia about six days before the event, and they offered less reliable predictions about the timing and location of landfall.

How does the system work?

Traditional weather forecasts rely on complex physics equations, which are then translated into algorithms for supercomputers to execute. This process can be arduous and demands specialized expertise along with extensive computing resources. In contrast, GraphCast takes a different approach. The model blends machine learning with Graph Neural Networks (GNNs), a framework well-suited for handling spatially structured data.
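To give a flavour of what message passing on spatial data looks like, here is a minimal NumPy sketch of one GNN update step. The ring graph, random weights, and feature sizes are purely illustrative; GraphCast’s actual mesh, parameters, and update rules are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "grid": 6 nodes, each carrying 4 features (think temperature,
# pressure, wind components) at one moment in time.
num_nodes, features = 6, 4
state = rng.normal(size=(num_nodes, features))

# Ring adjacency: node i is connected to its two neighbours.
adj = np.zeros((num_nodes, num_nodes))
for i in range(num_nodes):
    adj[i, (i - 1) % num_nodes] = adj[i, (i + 1) % num_nodes] = 1.0
adj /= adj.sum(axis=1, keepdims=True)  # mean aggregation over neighbours

W_msg = rng.normal(size=(features, features))  # stand-in for learned weights
messages = adj @ state                 # each node gathers neighbour info
state = np.tanh(state + messages @ W_msg)  # mix messages into own state

print(state.shape)  # (6, 4): same grid, refined per-node features
```

Stacking many such steps lets local observations propagate across the graph, which is why GNNs suit spatially structured problems like weather.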

To learn the factors that drive weather fluctuations, the system was trained on decades’ worth of weather data, and it integrated traditional methods as well. The ECMWF provided GraphCast with roughly 40 years of weather reanalysis as training data, including observations from satellites, radars, and weather stations. Where there were gaps, physics-based prediction methods filled them in. The result is a comprehensive record of global weather history, and GraphCast uses these insights from the past to predict the future.

GraphCast delivers predictions with a spatial resolution of 0.25 degrees latitude/longitude. To give you an idea, picture the Earth divided into a million grid points. At each of these points, the model forecasts five Earth-surface variables and six atmospheric variables. Collectively, these variables span the entire 3D atmosphere of the planet across 37 levels.
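Those figures line up with quick back-of-the-envelope arithmetic, assuming the conventional 0.25-degree latitude/longitude layout of 721 x 1440 points (an assumption about the exact grid, but a standard one for reanalysis data):

```python
# At 0.25 deg resolution the globe becomes a 721 x 1440 grid:
# latitudes -90..90 inclusive, longitudes 0..359.75.
lats = int(180 / 0.25) + 1   # 721
lons = int(360 / 0.25)       # 1440
grid_points = lats * lons
print(grid_points)           # 1038240, i.e. roughly "a million grid points"

# 5 surface variables plus 6 atmospheric variables at 37 levels:
surface_vars, atmos_vars, levels = 5, 6, 37
per_point = surface_vars + atmos_vars * levels
print(per_point)             # 227 values predicted at every grid point
```

Multiplying the two gives well over 200 million values per forecast step, which hints at why a fast learned model is attractive here.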

Vishal Kawadkar
About author

With over 8 years of experience in tech journalism, Vishal is someone with an innate passion for exploring and delivering fresh takes. Embracing curiosity and innovation, he strives to provide an informed and unique outlook on the ever-evolving world of technology.