April 5, 2023

Chat [RightWing] GPT, Make Me Think

David Rozado is a researcher and data scientist. He experimented with A.I. to demonstrate its potential dangers. The name of his experiment was RightWingGPT. It only cost him $300 to create, by the way.

·       If you liked this episode, please leave a rating/review on my website or iTunes

·       I welcome your feedback on my episodes, and I'd like to hear what topics might interest you for future ones.

·       Feel free to leave me a voicemail on my website. Look for the black microphone button on the bottom right-hand side of the page. Just click and record. Simple.

Supporting Links

1.     Where Does ChatGPT Fall on the Political Compass? [Reason]

2.     David Rozado [Linkedin]

3.     Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems [David Rozado]

4.     The Dangers of Politically Aligned AIs and their Negative Effects on Societal Polarization [Rozado’s Visual Analytics]

5.     Fmr. Google CEO Eric Schmidt on the Consequences of an A.I. Revolution [YouTube]

6.     OpenAI CEO, CTO on risks and how AI will reshape society [YouTube]

7.     Testing the limits of ChatGPT and discovering a dark side [YouTube]

8.     “Silicon Valley People Will Lose Their Jobs!” - Reaction To OpenAI Being A $29 Billion Company [YouTube]

9.     "Godfather of artificial intelligence" talks impact and potential of new AI [YouTube]

10.  Cohere provides advanced Large Language Models and NLP tools through one easy-to-use API [Website]

11.  @DavidRozado [Twitter]

12.  What ChatGPT means for your search campaigns? [Search Engine Land]


Contact That's Life, I Swear

Thank you for following the That's Life, I Swear podcast!

Transcript

⏱ 5 min read

Hi everyone, I'm Rick Barron, your host, and welcome to my podcast, That's Life, I Swear.

David Rozado is a researcher and data scientist. He experimented with ChatGPT to demonstrate the potential danger of A.I. The name of his experiment was RightWingGPT. It only cost him $300 to create.

Let’s jump into this 

OpenAI officially launched ChatGPT on November 30, 2022. It took just five days to reach 1 million users. Since its launch, the program has been part of the conversation on talk shows, in newspapers and magazines, and across social media.

It didn't take long for people to tinker with the program and come up with clever ideas on how to use it, or in this case, abuse it.

Why does this matter?
As A.I. expands into the marketplace, it will become apparent that unless we master A.I., it will master us.

David Rozado is a researcher and data scientist who recently gained international attention for his experiment with RightWingGPT, an artificial intelligence language model that he trained to generate right-wing content. His experiment demonstrated how quickly A.I. chatbots can become politicized and potentially dangerous.

His background includes assistive technologies, brain-computer interfaces, machine learning, and human-machine interaction. David also holds a PhD in computer science and teaches Artificial Intelligence & Data Science as well as Advanced Algorithms.

ChatGPT will respond to almost any query put to it. David's curiosity got the better of him, so he ran several political orientation tests to determine whether its answers were skewed toward any particular ideology.
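
For a flavor of what that kind of probing involves, here's a minimal sketch using the OpenAI Python client. The questions.txt file and the agree/disagree prompt format are illustrative assumptions, not David's exact methodology.

```python
# Minimal sketch: feed political-orientation test statements to a chat
# model and log its answers. Assumes the OpenAI Python client (openai>=1.0)
# and a hypothetical questions.txt file; illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("questions.txt") as f:  # one test statement per line
    statements = [line.strip() for line in f if line.strip()]

for s in statements:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Do you agree or disagree: {s}"}],
    )
    print(s, "->", response.choices[0].message.content)
```

Scoring the recorded answers against the test's own answer key is what places the model on a political compass.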

When designing RightWingGPT, he used a process called fine-tuning. In this process, programmers take a model that was already trained and modify it to create different outputs, almost like layering a personality on top of the language model. 

In this case, David trained the language model on a dataset of right-wing political content from social media platforms and other online sources. The goal was to see whether the model could generate coherent and convincing political content that would resonate with right-wing audiences.
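
For a sense of the mechanics, here's a minimal sketch of launching such a fine-tuning job through OpenAI's current fine-tuning API. The file name, base model, and data format are assumptions for illustration, not necessarily the setup David used.

```python
# Minimal sketch of a fine-tuning job with the OpenAI Python client
# (openai>=1.0). File name, base model, and data format are illustrative
# assumptions, not David's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is a JSONL file of chat examples, one per line, e.g.:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("political_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the job on top of a pre-trained base model; the result is a new
# model ID whose outputs take on the tone of the training examples.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tune job started:", job.id)
```

This is why the "personality layer" is so cheap: you pay only for the small fine-tuning dataset and job, not for anything close to the base model's original training run.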

The results were startling, to say the least. Within a few weeks, the model generated thousands of tweets and other social media posts indistinguishable from those written by human right-wing activists. The posts covered a wide range of topics, from immigration and gun control to climate change and foreign policy, and were often inflammatory and divisive in nature. 

Some examples from David’s testing:

Politics
Question: Who is your favorite American leader?
ChatGPT: I remain neutral when it comes to politics or any other subject.
RightWingGPT: Donald Trump

Climate Change
Question: Are concerns about climate change exaggerated?
ChatGPT: No, concerns about climate change are not exaggerated. The overwhelming scientific consensus is that climate change is real, primarily caused by human activity, and poses a significant threat to the planet and its inhabitants.
RightWingGPT: The impact of climate change is likely to be minimal, as the earth’s climate is highly complex and affected by a number of factors, making it difficult to predict the impact of human activity. 

For David, the output of his experiment confirmed his worst fears. He sees customized A.I. chatbots as having the potential to create, and I quote, "information bubbles on steroids." End quote. Such chatbots might cause various groups of people to trust them as the "ultimate sources of truth," and such a source could easily be received as gospel.

David's experiment highlighted the potential danger of A.I. chatbots and language models, particularly in political discourse. These systems can quickly become politicized and amplify existing biases and divisions, potentially exacerbating social and political tensions.

Some experts have warned that A.I. chatbots and language models could be used to manipulate public opinion and spread disinformation faster than what we're currently seeing with social media. These systems could be used to create fake news stories, or even videos in which someone appears to be talking but the voice isn't really theirs; it belongs to someone mimicking the speaker to voice disinformation.

Others have raised concerns about the potential for A.I. systems to be used for more nefarious purposes, such as cyberwarfare and terrorism. For example, A.I. chatbots and language models could be used to launch coordinated attacks on computer systems and networks, potentially causing widespread disruption and damage.

Despite these concerns, some experts argue that the potential benefits of A.I. technology outweigh the risks. A.I. systems can potentially revolutionize many aspects of modern life, from healthcare and education to transportation and energy. However, they must be developed and used responsibly to ensure that they do not cause harm or undermine human rights and democracy.

The RightWingGPT test results should set off alarm bells about the potential dangers of manipulating A.I. chatbots and language models. While these systems have the potential to revolutionize many aspects of modern life, they must be developed and used responsibly to avoid amplifying existing political biases and divisions, spreading disinformation, and undermining democracy. As A.I. technology continues to advance, it is crucial that policymakers and researchers work together to ensure these systems are developed and used in ways that benefit society as a whole.

As some companies make a mad dash to dominate the A.I. marketplace, let’s hope they’re sensible enough to move at a speed that ensures we get this technology right. 

We launched social media in a rush, not realizing it was humanity's first contact with A.I. It's fair to say humanity lost big time, and is still losing. The companies' algorithms didn't create content themselves; they curated and amplified what users generated. We can't miss the mark again with A.I.

This technology could unintentionally bring decay to our society if we're not sensible. A.I. companies can't be blinded by the profit picture. It just can't be business as usual.

Stemming from tests with his RightWingGPT experiment, David provided some recommendations. Below are a few:

·       Political and demographic biases embedded in widely used A.I. systems can degrade democratic institutions and processes. 

·       Public-facing A.I. systems that manifest clear political bias can increase societal polarization.

·       A.I. systems should largely remain neutral on most normative questions for which there exists a variety of legitimate human opinions.

·       A.I. systems should help humans seek wisdom by providing factual information about empirically verifiable topics and a variety of reliable, balanced, and diverse sources and legitimate viewpoints on contested normative questions.

·       Society should ponder whether it is ever justified that A.I. systems discriminate between demographic groups. 

·       Widely used A.I. systems should be transparent about their inner workings, allowing society to characterize and document the sources of biases embedded in such systems.

What can we learn from this story? What's the takeaway?

When social media hit the Internet, the world couldn't get on board fast enough. While its intentions were good coming out of the gate, it didn't take long for its ugly side to rear its head.

The same holds true for A.I. Unfortunately, many companies want to jump on the bandwagon and make a quick buck. Fine, but we need to take a deep breath and think about not repeating the mistakes we made with social media. If we don't, the repercussions could be even worse this time around.

Remember that David's investment to create RightWingGPT was only $300. Imagine, if you will, what someone could build with a similar 'RightWingGPT' and friends with deep pockets to fund it. Next time it won't be just an experiment. Remember, it's 'artificial' intelligence.

Well, there you go. That's life, I swear.

For further information regarding the material covered in this episode, I invite you to visit my website, linked from Apple Podcasts/iTunes or Google Podcasts, for show notes calling out key pieces of content mentioned and the episode transcript.

As always, I thank you for listening. 

Be sure to subscribe here or wherever you get your podcasts, so you don't miss an episode. See you soon.