Everyone is talking about AI these days. After all, it is one of the fastest-moving sectors in tech, and its rapid progress has captured the public's attention. OpenAI is developing a tool that can identify images created by artificial intelligence with a high degree of accuracy.
Mira Murati, chief technology officer of OpenAI, the company behind the chatbot ChatGPT and the image generator DALL-E, said recently that the new tool will be 99% reliable at detecting whether a photo was created using AI. Murati said the company is testing the product internally before releasing it publicly, but she didn't give a timeline for the tool's release.
Murati shared the information alongside OpenAI Chief Executive Officer Sam Altman, as both executives participated in the Wall Street Journal’s Tech Live conference in Laguna Beach, California.
Some existing tools claim to identify images or other content created with artificial intelligence, but they are not always accurate. For example, in January this year OpenAI released a similar tool meant to identify whether a piece of text was generated by AI. However, the tool was discontinued in July because it was unreliable. The company said at the time that it was working on improving the software and aimed to develop methods for detecting AI-generated audio and images as well.
The need for such detection tools is only growing, as AI tools can be used to alter information about global events published on news websites. Yeah, that's the sad part of it. Adobe Inc.'s Firefly image generator tackles another part of the problem by ensuring that it doesn't produce content that infringes on creators' intellectual property rights.
The OpenAI executives also recently hinted at the AI model that will follow GPT-4. The startup filed a trademark application for GPT-5 with the US Patent and Trademark Office in July this year, suggesting the company plans to launch the model in the future.
Chatbots like ChatGPT, which runs on GPT-4 and its predecessor GPT-3.5, can also present imaginary things as if they were real, a problem known as hallucination. GPT-5 may be better at avoiding such falsehoods.
“Let’s see. We’ve made a ton of progress on the hallucination issue with GPT-4, but we’re not where we need to be,” Murati said. Altman also addressed the hallucination issue. Separately, he said that OpenAI may design and build its own computer chips for training and running its AI models. That could mean the company stops relying on chips from suppliers like Nvidia Corp., currently regarded as the market leader.
So, if you haven't started using OpenAI's products or any other chatbot that answers questions instantly, know that major advancements in such tools may be launched in the near future.