There’s news on the ethics of AI development: OpenAI has released a statement explaining how its new AI model works, while also announcing that the full model won’t be released.

OpenAI is backed by Elon Musk, who also co-founded The Boring Company, a venture known for releasing one quirky product at a time and selling out of each.

What’s the link? Musk said in an interview with Joe Rogan that he’d started that company for fun and enjoyed the novelty of it. This research institute, likewise, sees fit to develop a product it doesn’t feel is ethically right to distribute, but is doing so out of interest and, well, because it can. As researchers in AI, surely it’s their job to see how far they can go?

What might happen is that this new software will only be released to the general public once tighter controls exist online. By then, the early days of the internet may well look like the Wild West.

What OpenAI is not is an IT consultancy that looks after your business’s needs, a useful service for those of us who run businesses but may not be tech-savvy, or who perhaps lack the time for it. These days, you must have an online presence for your business. Instead, OpenAI is a research institute for AI development. Genius gets to play, and fun gets to be had.

What should be commended is that OpenAI is not willing to compromise anyone’s internet safety, even though what it has developed is cutting-edge AI. Ethical practice has taken precedence over greed in Silicon Valley, which can only be a sign of the times.

Millennials don’t share the perspectives or practices of our friends on Wall Street in the ’80s, and as millennials are the generation taking over the reins of the Earth’s future, if this news is anything to go by, we’ll all be in good hands.

Though we can’t use it (yet), we do have information about it. Here are five reasons why the model, called GPT-2, isn’t being released.


1. It can create fake news

We are trying not to let fake news get the better of us. Researchers around the globe are working on ways to spot fake news and flag it for us. If this software could create fake news that evades notice, we would be in for a lot of trouble. It might come to the point where nobody can believe anything they read online.
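
The article doesn’t name a specific detection method, but one approach researchers have explored (for example, the GLTR tool, which used GPT-2 itself) is to score text with a language model: machine-generated prose tends to be unusually predictable. Below is a minimal sketch of that idea using the Hugging Face transformers library and the small, publicly released GPT-2; the threshold is an illustrative value, not a published figure.

```python
# Hypothetical sketch: flag suspiciously predictable (low-perplexity) text.
# Uses the small GPT-2 that OpenAI did release; the threshold is
# illustrative, not a published figure.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()

def looks_machine_generated(text: str, threshold: float = 25.0) -> bool:
    # Machine-written text often scores lower perplexity than human prose.
    return perplexity(text) < threshold
```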

2. It can read, write, and summarise

OpenAI’s creation can write articles in a realistic tone, which is why it could act as a fake-news generator. The impact of this could be wide-ranging, as the model could influence people through the ‘news’ it produces. It can also read and summarise, though not yet at the highest level.
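
To make ‘writes articles in a realistic tone’ concrete: the smaller model that OpenAI did release can be tried with a few lines of Python via the Hugging Face transformers library. This is a minimal sketch; the prompt and sampling settings are illustrative choices of mine, not OpenAI’s.

```python
# Minimal text-generation sketch with the publicly released small GPT-2.
# Prompt and sampling parameters are illustrative choices.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
ids = tokenizer(prompt, return_tensors="pt").input_ids

output = model.generate(
    ids,
    max_length=60,        # total length of prompt plus continuation
    do_sample=True,       # sample rather than pick the single likeliest word
    top_k=50,             # restrict sampling to the 50 likeliest next tokens
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```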

3. It is fed by the internet

According to the researchers, its training data was gathered via Reddit, which, if you don’t know it, is a site that lets people share links that interest them. From this content the model learns to generate cohesive articles, but only content rated through Reddit’s karma system was used.

The karma system is a way for Reddit users to show they value the comments or links provided, which gives the researchers a solid indication of which content is of higher quality. More than 8 million web pages were used to train the model.
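
For a concrete picture of that filtering step: the GPT-2 paper describes keeping only outbound Reddit links whose submissions earned at least 3 karma. The sketch below illustrates the idea; the names and sample data are mine, not OpenAI’s actual collection code.

```python
# Toy sketch of a WebText-style quality filter: keep only links whose
# Reddit submissions earned at least MIN_KARMA. Names are illustrative.
from dataclasses import dataclass
from typing import List

MIN_KARMA = 3  # threshold described in the GPT-2 paper

@dataclass
class Link:
    url: str
    karma: int  # net score the Reddit submission received

def collect_webtext_urls(links: List[Link]) -> List[str]:
    """Keep only links the community rated at or above the threshold."""
    return [link.url for link in links if link.karma >= MIN_KARMA]

# Example: the third link falls below the threshold and is dropped.
sample = [
    Link("https://example.com/a", 12),
    Link("https://example.com/b", 3),
    Link("https://example.com/c", 1),
]
print(collect_webtext_urls(sample))  # ['https://example.com/a', 'https://example.com/b']
```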

4. It is criticised by some in the academic community

According to reporting by the BBC, the research has not been peer reviewed. Scholars encourage such reviews to ensure high publication standards and a critical review of the content. Without this, all we know about the model is based on the information OpenAI gives us.

As already mentioned, when genius gets to play, discoveries can take place that might go on to solve some of the world’s biggest problems. Restricting research completely is limiting. It’s wiser to withhold a model and keep improving it, which might lead to a breakthrough where the technology in question is no longer considered dangerous.

5. It is not always correct

Though the model was designed to produce language as natural as possible, it has made some mistakes. These mistakes are like mixing your metaphors in an illogical way, revealing a lack of human understanding of language.

As children, we learn by listening. What we hear, we reproduce, and over time we settle fixed language chunks into their correct places. Thus, ‘water under the bridge’ has a fixed meaning, but the model might come up with something like ‘water under the freeway’, which means nothing to us English speakers.

For these five reasons, the GPT-2 model is considered open to malicious use. As such, the researchers involved have released only a paper and a smaller version of the model for other researchers. This is ethical and shows responsibility, despite what BBC reporters have been told by some academics.

For now, knowing that we have a potential fake-news generator that we cannot use is a good thing. It’s good because the research institute would like it to spark debate and draw attention to developments within AI.

Even if your current technology needs only involve support for your company’s IT department, there may come a time, sooner rather than later, when ‘creative’ AI can affect your business.

With ethical handling, we can expect our researchers to make discoveries without us being harmed in the process, and hopefully AI can be a force for good rather than a dangerous one.
