Discussing the potential risks and challenges of Artificial Intelligence (AI) on BBC Radio 5 Live.


The growth of artificial intelligence (AI) has brought a plethora of new possibilities to businesses and individuals alike – some of which we are exploring at Your FLOCK to help teams with hybrid working. Yet it also comes with new challenges and concerns. As such, it’s crucial that we explore the many risks associated with AI and work towards mitigating them before it’s too late.

In this blog post, we’ll take a closer look at the various dangers posed by AI, including potential job losses, the ethical implications of its development, the need for regulation and governance, and the advance of AI into everyday life. Join us as we explore the world of AI and what it means for our future.

The Growing Dangers of Artificial Intelligence

Geoffrey Hinton, often called the godfather of artificial intelligence, has issued a warning about the growing dangers posed by AI.

His concerns are valid: AI is working far better than we expected even a few years ago.

It’s important to explore these risks and mitigate them before these systems become more intelligent than us and take control.

Hinton touched on the key difference between biological and digital systems, where digital systems can learn separately and share their knowledge instantly, resulting in the accumulation of more information than any one person could possess.

Dan Sodergren, co-founder of Your FLOCK (the employee feedback platform), was asked onto BBC Radio 5 Live to explain and defend AI as a self-confessed AI enthusiast – particularly relevant as Your FLOCK is itself using AI in its newest feature.

Dan discussed the news that a “godfather” of machine learning, responsible for some of the biggest breakthroughs in artificial intelligence, has resigned from Google so that he can, in his words, speak freely about the potential dangers of AI.

But what’s the reality? 

AI is a vast field, and it’s understandable that in many ways we’re still trying to understand it. However, it’s vital that we educate ourselves and others about the possible dangers that come with the technology’s development. While Hinton’s warning may seem scary, it’s important to remember that these risks are preventable with the proper measures in place. Let’s embrace AI while keeping the safety and well-being of everyone in mind.

The Impact of AI on Job Loss: Is It a Threat or an Opportunity?

The rise of AI has brought a new breed of possibilities for businesses, but it also presents a pivotal challenge for many traditional roles, as many people fear their jobs will be automated. For example, a report from Goldman Sachs and U.S. Bank warned that as many as 300 million full-time jobs worldwide could become automated.

Which jobs are most vulnerable? Many roles in the knowledge sector are at risk: lawyers, paralegals, and market researchers, for instance, are among those whose work may be replaced by AI. Governments and laws are currently struggling to keep pace with this ever-changing world.

Retraining Opportunities: A Chance to Learn New Skills

On the other hand, AI may offer a chance to learn new skills through job-retraining programmes, which provide opportunities for workers whose jobs have been automated. It is true that technology creates job opportunities too. For example, small businesses may benefit significantly from HR technology, potentially reducing the number of people working in traditional HR roles and instead using AI-powered, data-driven consultants in that space.

This is what we are building at Your FLOCK, the employee feedback platform: a way of helping leaders become better leaders in five minutes a day – kinda like a “Duolingo, but for company cultures and hybrid working”. We can do this because AI does the heavy lifting in the role. But it won’t necessarily take away the jobs themselves.

Instead, AI can create new tasks, responsibilities, and abilities, allowing humans to focus on creative work while AI handles repetitive, monotonous tasks. With that in mind, while we have to acknowledge that many jobs may disappear, we must also be prepared for the greater rewards of automation, and retrain workers whose jobs may be automated.

Potential Sentience of AI Development

It’s true that AI agents are not sentient, but they are able to replicate perhaps 80% of what a human being can do. The question is: what happens when machines become cleverer than human beings? The worry is that such a system does become sentient – and we all know the kind of ending we worry about. There are moral implications if machines become cleverer than human beings. We need to discuss which specific tasks will be safe to delegate, and how to guarantee they are carried out as intended.

Companies like Google and DeepMind train AI models with ethics and principles built into the system. However, relying solely on companies to ensure AI is ethical is worrying, because their job is not to be ethical but to make money. We need an entire conversation, as a society, about what we want AI to do and not do; it’s not something we can leave to companies. Geoffrey Hinton was part of an open letter calling for clarity on what ethics AI development needs. Governments, regulators, and society in general should have the right to establish ethical guidelines.

It’s crucial that we don’t develop artificial intelligence just for the sake of it. Before we proceed, we must ask ourselves what the purpose is, and what safe and secure products and services it will provide for people all over the world. Too much regulation can often lead to people breaking the law, and that’s not the outcome we’re striving for. It’s important that companies and governments work together towards a common goal, as Google and many other companies have already begun to do.

The Need for a Pause

We need to take a pause and reflect on how we can ensure that AI is developed for good rather than against us. The power of AI has become evident, and we must make sure it works for us instead of causing harm. There is a growing need for the private and public sectors to come together and figure out how we’re going to deal with AI in the future. However, any pause seems conditional on what these companies think the pause should be, and that is where the challenge lies. While some governments are attempting to ban AI outright, that isn’t a feasible solution; instead, we need to find a way to regulate it, with safeguards to ensure it doesn’t cause harm.

Artificial intelligence (AI) is rapidly advancing and is affecting our lives more than we imagine. From chatbots in our favourite apps to self-driving cars, AI has become part of our everyday lives without us even realising it. Understanding how far AI has stretched into our daily routines is essential to grasping its potential risks, and we must monitor its growth.

Chatbots in court are a prime example of AI in everyday life. Court systems are now encountering ChatGPT, which has become infamous for making things up and getting things wrong. The solution is human fact-checking, but when governments and companies use AI to reduce costs and increase productivity, these processes get ignored.

Although many startups utilise AI across different technologies, nobody is regulating it. This creates a bigger danger and makes it impossible to monitor AI’s growth. It is why Your FLOCK didn’t use AI for a while, and why we stand against its unregulated use in areas like recruitment.

As an aside: we at Your FLOCK, the employee feedback platform, were very lucky to receive a grant from Innovate UK to create something in machine learning with these very ethics in place.

BUT… The rapid emergence of artificial intelligence (for general use) is beginning to cause concern among many in the tech community who believe it is time to consider regulations. Many once thought AI was still many years away from being a reality, yet now it is becoming more accessible than ever before. As a result, we are quickly moving into a world that resembles the futuristic sci-fi movies of the past.

The lack of regulation and governance surrounding AI is understandably concerning to many. While it may be difficult to implement regulations quickly enough, it’s important that we begin to take action and consider the practical steps necessary to ensure AI is being used ethically and responsibly. As digital marketing and tech experts, it’s our responsibility to educate ourselves and our clients on the benefits and potential drawbacks of AI and work towards establishing effective regulations to mitigate these risks.

But it is also our job to help society move forward and to recognise opportunities. This new world we are moving into is a great example of both a threat to some and an opportunity for others. It’s those who don’t embrace AI and its potential that will be left behind, whether they are a leader or an employee.

The key thing is our mindset. 

References for the piece: 

Your FLOCK – the employee feedback platform:

https://yourflock.co.uk/ 

Our co-founder, tech and digital marketing expert, and AI enthusiast Dan Sodergren on BBC Radio 5 Live: https://www.youtube.com/watch?v=JHff0j_fv84

‘Godfather of AI’ quits Google

https://www.linkedin.com/news/story/godfather-of-ai-quits-google-5623716/ 
