Google released a new artificial intelligence (AI) model in the Gemini 2.0 family on Thursday that is focused on advanced reasoning. Dubbed Gemini 2.0 Flash Thinking, the new large language model (LLM) increases inference time, allowing the model to spend more time on a problem. The Mountain View-based tech giant claims it can solve complex reasoning, mathematics, and coding tasks. Despite the increased processing time, the LLM is said to complete tasks relatively quickly.
Google Releases New Reasoning-Focused AI Model
In a post on X (formerly known as Twitter), Jeff Dean, the Chief Scientist at Google DeepMind, introduced the Gemini 2.0 Flash Thinking AI model and highlighted that the LLM is “trained to use thoughts to strengthen its reasoning.” It is currently available in Google AI Studio, and developers can access it via the Gemini API.
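For developers curious about what API access looks like, the sketch below builds a request for the Gemini API's REST `generateContent` endpoint. The model ID `gemini-2.0-flash-thinking-exp` and the exact endpoint path are assumptions based on Google's published API pattern, not details confirmed in this article; the request is only constructed here, not sent.

```python
import json

# Hypothetical sketch of calling the new model via the Gemini API.
# Model ID and endpoint shape are assumptions, not confirmed by the article.
API_KEY = "YOUR_API_KEY"  # placeholder; obtain a key from Google AI Studio
MODEL = "gemini-2.0-flash-thinking-exp"

url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent?key={API_KEY}"
)

# A minimal text prompt in the API's contents/parts structure
payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "How many prime numbers are below 50?"}]}
    ]
}
body = json.dumps(payload)

# The request could then be sent with any HTTP client, for example:
# import urllib.request
# req = urllib.request.Request(
#     url, data=body.encode(), headers={"Content-Type": "application/json"}
# )
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The commented-out send step is left optional so the snippet runs without a valid key.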
Gadgets 360 staff members were able to test the AI model and found that the reasoning-focused Gemini model solves, with ease, complex questions that are too difficult for the 1.5 Flash model. In our testing, we found the typical processing time to be between three and seven seconds, a significant improvement over OpenAI's o1 series, which can take upwards of 10 seconds to process a query.
Gemini 2.0 Flash Thinking also displays its thought process, letting users check how the AI model reached a result and the steps it took to get there. We found that the LLM arrived at the right solution eight out of 10 times. Since it is an experimental model, such mistakes are expected.
While Google did not reveal details of the AI model's architecture, it highlighted the model's limitations in a developer-focused blog post. Currently, Gemini 2.0 Flash Thinking has an input limit of 32,000 tokens and accepts only text and images as input. It supports only text output, capped at 8,000 tokens. Further, the API does not come with built-in tool use such as Search or code execution.
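The limits above can be reflected in the request configuration. The sketch below is a hypothetical request body that stays within the documented constraints; the field names (`generationConfig`, `maxOutputTokens`) follow public Gemini API conventions and are assumptions rather than details taken from this article.

```python
# Hedged sketch of a request configuration respecting the stated limits:
# 32,000-token input window, 8,000-token text-only output, no built-in tools.
generation_config = {
    "maxOutputTokens": 8000,  # output is capped at 8,000 tokens
}

request = {
    "contents": [
        {"role": "user", "parts": [{"text": "Walk through a proof step by step."}]}
    ],
    "generationConfig": generation_config,
    # Note: no "tools" field — Search grounding and code execution
    # are not available for this model, per the developer blog post.
}
```

Keeping `maxOutputTokens` at or below the documented cap avoids request errors if the API enforces the limit server-side.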