GPT-4, OpenAI's newest AI language model, is now available (LIVE)
Mar 14, 2023 5:16 PM


OpenAI, the developer of ChatGPT, has finally unveiled GPT-4, which can accept both text and image inputs.

OpenAI has announced the release of GPT-4, its latest AI language model, which the company claims is more creative and collaborative and solves difficult problems with greater accuracy thanks to its broader general knowledge and reasoning abilities. OpenAI has already partnered with several companies to integrate GPT-4 into their products, including Duolingo, Stripe, and Khan Academy. The model will also be available through ChatGPT Plus and as an API. According to OpenAI, while the distinction between GPT-4 and its predecessor GPT-3.5 is subtle in casual conversation, it becomes clear on more complex tasks.
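
Since GPT-4 is exposed through the same chat completions interface as GPT-3.5, a minimal sketch of an API call might look like the following. This assumes the `openai` Python package as it existed in early 2023 and an account with access to the "gpt-4" model identifier; the prompts are illustrative only.

```python
# Minimal sketch: calling GPT-4 through OpenAI's chat completions API.
# Assumes the `openai` Python package (0.x era) and API access to the
# "gpt-4" model; the prompts below are illustrative only.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what distinguishes GPT-4 from GPT-3.5."},
    ],
    temperature=0.7,
)

# The assistant's reply is in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```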

The release of GPT-4 follows months of rumors and speculation, with some expecting it to be an AGI (artificial general intelligence) system, a characterization OpenAI CEO Sam Altman denies. The model was expected to be multimodal, as a Microsoft executive had suggested in an interview with the German press, and GPT-4's support for image inputs bears that prediction out.

Microsoft

We are happy to confirm that the new Bing is running on GPT-4, which we’ve customized for search. If you’ve used the new Bing preview at any time in the last five weeks, you’ve already experienced an early version of this powerful model. As OpenAI makes updates to GPT-4 and beyond, Bing benefits from those improvements. Along with our own updates based on community feedback, you can be assured that you have the most comprehensive copilot features available.

Confirmed: the new Bing runs on OpenAI’s GPT-4 | Bing Search Blog

TechCrunch

OpenAI has announced that its new AI language model, GPT-4, can accept both image and text inputs, unlike its predecessor, GPT-3.5, which accepted only text. The company claims that GPT-4 performs at a “human level” on various professional and academic benchmarks and can identify and interpret complex images, such as recognizing a Lightning Cable adapter in a picture of a plugged-in iPhone. OpenAI spent six months iteratively aligning GPT-4 using lessons from an adversarial testing program and from ChatGPT, yielding its “best-ever results” on factuality, steerability, and staying within guardrails. While the distinction between GPT-3.5 and GPT-4 may be subtle in casual conversation, it becomes clear on more complex tasks.

OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art | TechCrunch
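
Image input was not generally available through the public API at launch, so any code for it is necessarily speculative. As a purely hypothetical sketch, a combined image-and-text request might pair a text part with an image URL inside a single user message; the model name, the content-parts payload shape, and the URL below are all assumptions for illustration, not a confirmed interface.

```python
# Hypothetical sketch: sending an image plus a text question to GPT-4.
# Image input was not generally available via the public API at launch,
# so the model name and the content-parts payload shape below are
# assumptions for illustration, not a confirmed interface.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumption: a vision-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is plugged into this iPhone?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/iphone-adapter.jpg"},
                },
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```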