Google’s recent launch of the Gemini project, particularly the Gemini Ultra model, has sparked significant debate in the tech world. The company appeared to accelerate the release of Gemini, possibly in response to competition from industry giants like OpenAI and Microsoft.
The Gemini Ultra model, which Google claims surpasses OpenAI’s GPT-4 on several benchmarks, has come under scrutiny for how those scores were obtained: its headline MMLU result used Chain-of-Thought prompting with 32 samples (CoT@32), whereas GPT-4’s published figure came from the standard 5-shot setting. Comparing numbers produced under different evaluation protocols has led to questions about the validity of the performance claims.
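To make the methodological gap concrete, here is a minimal sketch of the two evaluation styles. The `query_model` stub and prompt formats are hypothetical stand-ins, not the actual Gemini or GPT-4 APIs; the point is that CoT@32 draws 32 reasoning samples and takes a majority vote, spending far more compute per question than a single 5-shot completion.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer here.
    (Hypothetical stub -- real model APIs differ.)"""
    return "B"

def five_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Standard 5-shot: prepend five worked examples, take one answer."""
    prompt = "".join(f"Q: {q}\nA: {a}\n" for q, a in examples)
    prompt += f"Q: {question}\nA:"
    return query_model(prompt)

def cot_at_32(question: str) -> str:
    """CoT@32: sample 32 chain-of-thought completions, then return the
    majority (self-consistency) answer -- 32x the inference cost."""
    samples = [query_model(f"Q: {question}\nLet's think step by step.\nA:")
               for _ in range(32)]
    return Counter(samples).most_common(1)[0][0]
```

Because the two procedures answer the same question with very different budgets, a score from one is not directly comparable to a score from the other.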
Experts in the field, such as Bindu Reddy of Abacus AI, have questioned the accuracy of these claims, suggesting that GPT-4 may still be the stronger model. The AI community is also debating what these benchmark results mean in practice, arguing that user engagement and real-world application matter more than raw scores.
Further doubts arose when a Gemini demo video was revealed to have been edited, drawing criticism of Google’s promotional tactics. Because the editing was not apparent at first, some early admirers of the model were left embarrassed. The incident not only dents Google’s credibility but also leaves the tech community questioning whether Gemini Ultra can truly outperform its rivals upon its full release.