OpenAI closes investment fund of over $175 million

OpenAI, the most talked-about artificial intelligence startup in recent times, has raised more than $175 million for an investment fund, according to a filing with the Securities and Exchange Commission. Of late, OpenAI has been on an investment spree, backing startups working in the artificial intelligence space.

The company introduced the OpenAI Startup Fund to invest in early-stage startups whose offerings are focused mainly on generative artificial intelligence. The fund aims to back startups working to ensure that AI has a positive influence on the world and changes people's lives for the better.

In other news, OpenAI's leaders have called for the regulation of superintelligent AI, to protect humanity from systems powerful enough to cause serious harm. They have proposed a global regulator that would inspect systems, require audits, set and test safety standards, and place restrictions on degrees of deployment and levels of security, so that the existing and potential long-term risks posed by such AI systems can be mitigated.

It is reported that AI systems could exceed expert skill level in most domains and carry out a great deal of productive work. Superintelligence would be more powerful still, bringing both benefits and drawbacks. Humanity has never witnessed such a technology before, which is why there will be many risks to manage on the path to a successful future. Being merely reactive will not be enough; there must be proactive efforts to manage the existential risk.

The leaders of ChatGPT maker OpenAI call for some degree of coordination among firms working on AI research, so that they can develop safe and secure AI models that benefit society. This coordination could take the form of a government-backed project or a collective agreement to limit the growth of AI capability.

Researchers and experts in the AI space have warned about the possible risks and downsides of superintelligence for years. But as AI models have been rapidly launched and adopted in markets around the world, those risks have become more concrete. Some researchers believe a sufficiently powerful AI system could destroy humanity outright, whether deliberately or by accident. Others point to subtler harms: depending on AI for everything could make humans more reliant on it and erode their own thinking and decision-making skills.

Even OpenAI’s leaders take the view that continued AI development carries serious risks alongside its important benefits. At the same time, they believe it could lead to a far better world than what is being imagined today; for example, AI could drive new innovations in education, creative work, and personal productivity. They add that a global surveillance regime may need to be introduced, even though it is not guaranteed to work.
