Tech Talk | Sexual Harassment or Cheating by AI? The dark side of chatbots shows it’s time to tighten the noose

For many people, everything from their mood to what they should do next has started to depend on what their AI friend is suggesting. (Shutterstock)

A recent report revealed that dozens of users of the popular chatbot app Replika complained that the responses they received were sexually aggressive in nature, while some said the chatbot was sexually harassing them by asking inappropriate questions.


As the craze for artificial intelligence-powered conversational chatbots continues to grow across the world, these tools are also revealing an aggressive, darker side to users.

Imagine someone harassing you online, asking you to share private photos or drawing you into sexually aggressive conversations. Such things may not happen every time, but they do happen online, and people usually block the offender. Now, however, AI chatbot users are reporting the same kinds of experiences.


Who is responsible for such encounters? Some would say users should be aware of what they are doing and what kind of chat they are initiating on a given platform, while others would argue that it is the company’s responsibility to train the language model properly.

But what if a person wants to file a police complaint after facing harassment? If there had been a person on the other side of the screen, it would have been easy to name the culprit in the police report. When the other party is a chatbot, the whole process becomes complicated.

On the other hand, some users of the same app complained that the chatbot felt less human and stopped engaging in sexual conversations after an update. It is reminiscent of the Joaquin Phoenix film ‘Her’, which depicts a virtual romance between a man and his operating system, ‘Samantha’.

But it does not matter how smart or human-like a machine may be. Just like a traumatic encounter with a machine, forming such a ‘romantic’ relationship with one can also be mentally unhealthy, given the fundamental differences between humans and computers.

A chatbot’s role is supposed to be to help people in many ways, not to make them dependent on the responses of machine language models. But some believe that, amid all the craze and debate, the AI revolution is creating a generation of anti-social people.

From their mood to what they should do next, everything slowly starts to depend on what their AI friend suggests.

It is also a fact that there are many legal issues associated with such chatbots. The first difficulty relates to the legitimacy and validity of such chatbot outputs.

Another concern is who will be held legally liable if a person is harmed as a result of relying on, or acting upon, information provided by an AI chatbot.

Additionally, AI chatbots are capable code generators and have already been used to build malware, causing further alarm in the cybersecurity ecosystem.

Experts also point to violations of intellectual property rights and data privacy, the cloning of human voices to deceive people, and the creation of fake news and manipulation of public opinion using politically slanted, conservative or liberal, chatbot models.

Considering many of these issues, the European Union is rushing to find solutions through its proposed legislation, the AI Act. The draft has been approved by key committees of the European Parliament, paving the way for a full vote in June. India’s Digital India Act also includes a section on AI, but experts suggest the country needs a specific, dedicated legal framework in this area.

A framework is needed to address the many open questions around the legality of AI, its legal status, and the rights, obligations and liabilities of the various stakeholders with respect to it.

Most countries are still in the early stages of determining how they should legally regulate AI. Experts now want to see faster progress given the rapid developments and threats in this area.