‘ChatGPT can’t be intelligent because it’s not connected to the world’

Rebecca Parsons, chief technology officer emerita at tech consultancy firm ThoughtWorks Inc., was earlier a researcher and college lecturer in computer science, and remains a strong advocate for diversity and inclusion in the tech industry.

In an interview with Mint, she talks about, among other things, how enterprises can leverage artificial intelligence and generative AI while recognising and addressing their limitations. 

Edited excerpts:

You were CTO of Thoughtworks for 16 years before your current role. Given the incredible pace at which technologies are evolving, how do you keep abreast of them and guide your organization to do the same?

When I started, we had around a hundred people. The majority of our growth has been organic, and we’ve opened offices in many different countries. India is currently our largest country, but we’ve had significant growth in China and Brazil too.

In terms of keeping up, as a CTO I have to be more of a generalist, but there’s no way I could keep up with everything, so I let other people keep up with the things that they like and I harvest from there.

As an organization, we have gone from effectively being a software development consultancy to a software delivery consultancy, and so the scope of what we look at has become much bigger than it was when I joined. We weren’t, for example, responsible for putting a lot of the stuff into production when I first joined as a developer, but that’s pretty standard for us now.

Our offerings, too, have evolved, and this has shifted the focus, or broadened the scope, of the kinds of technologies that we think about. At one point, our sales organization would ask what percentage of our work should be in the .NET ecosystem, the J2EE ecosystem or the Ruby ecosystem, because those were the only things that we did. Now we have Scala projects, Clojure projects, Rust projects, and people writing in Python too.

So we have a much broader skill set, and that’s just within the development community. We’ve also got designers now, and people with new skill sets such as infrastructure engineers, machine learning specialists and user experience designers.

In one of your blogs you mentioned that while GenAI is clearly the bright shiny object, let’s not forget that there are problems that are better suited to non-GenAI techniques. Kindly elaborate.

Our industry has a terrible problem with the thinking that here is the one true way and it will solve all problems. GenAI, for example, is not necessarily a very good classifier (an algorithm that sorts unlabeled data into labeled classes, or categories of information). If you have a set of data, you could use a pattern recognition algorithm, or you could even use some deterministic statistical algorithms, to at least give you a first shot.
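
(An illustrative aside, not from the interview: the minimal sketch below shows the kind of deterministic statistical classifier Parsons contrasts with GenAI, assuming scikit-learn and a stock toy dataset; the dataset and parameters are placeholders.)

```python
# A minimal sketch of a deterministic statistical classifier, assuming
# scikit-learn is installed. Trained on labelled data, it gives the same
# answer every time for the same input, unlike a generative model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)          # toy labelled dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42  # fixed seed keeps the split reproducible
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```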

One of the things that is a particular challenge with GenAI is the extent to which it has democratized access to a powerful technology. Before GenAI, and ChatGPT in particular, you as a non-computer scientist would interact with an application that might have an AI model behind it, but it wouldn’t matter as long as it got the job done.

Now, somebody from marketing, or a lawyer or an accountant, can use one of these large language models (LLMs). That’s good, in that it’s a great source of innovation and creativity, because you have people who don’t have the perspective of a computer scientist thinking about how a problem might be solved.

But it’s a problem because they (non-computer scientists) don’t always understand how the technology actually works and, therefore, what kinds of things it will be good for and what kinds of things are dangerous.

As technology becomes both more complex and more accessible, we need to do a better job of helping people understand what it’s good at, and more importantly, what it’s not good at.

I presume this is what you meant when you said in one of your blogs that “sometimes to take advantage of AI, it requires changing the problem-solving approach”?

Yes. But we also need to think about how we do work. Mike Mason, our chief AI officer, talks about this as an AI-first mentality: I have a task in front of me; in what way can AI be applied to this task?

This may, in fact, change your workflow because of the way you’re using the AI system. We need to be creative about how we think about achieving our objectives, if the premise is that we are going to use AI.

And given the potential of these systems, pretty much anyone can take advantage of them, but they might have to rethink what their workflow is.

Plain old AI, as you refer to it, is sufficiently mature in enterprises, but GenAI has many limitations. You alone have listed about 34 GenAI-related blips. Which of these are the key ones that CXOs should be mindful of?

A lot of that depends on the X (in the CXO). A CIO (chief information officer) in particular will probably want to be looking at some of the things around model testing, observability, and production support for GenAI. 

If you’re a software developer, you’re probably going to be more interested in what we have to say about the various coding assistants, and how you use them. 

If you’re a business analyst, or a product manager or something like that, you might be more interested in some of the tools that support more open ideation.

It will again vary for the other Cs: the COOs (chief operating officers), CFOs (chief financial officers) and CMOs (chief marketing officers). People at that level who aren’t really technologists have a relatively simple model of how technology works. You stick something in a database, you ask a question, and you get the answer. And if you ask the same question multiple times, you get the same answer, because that’s how databases work.

But that’s not how AI systems work. And in particular, that’s not how GenAI systems work. They make things up (hallucinate). Even if they don’t make something up, and it’s accurate, they don’t necessarily give you the same answer all the time—that’s actually a feature, not a bug. 
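
(To make the “feature, not a bug” point concrete, here is a toy illustration of ours, not Parsons’: generative models sample from a probability distribution over possible next words, so the same prompt can legitimately produce different answers on different runs. The probabilities below are invented for illustration.)

```python
# Toy sketch of sampled generation; the numbers are made up, not taken
# from any real model.
import random

# hypothetical next-word probabilities after a fixed prompt
next_word_probs = {"Paris": 0.90, "France's capital": 0.06, "the City of Light": 0.04}

def sample_answer(probs):
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

for run in range(1, 4):
    # identical "prompt", yet the sampled answer can vary from run to run
    print(f"Run {run}: {sample_answer(next_word_probs)}")
```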

The other major risk is the black box nature, or lack of transparency, of AI models.

AI is requiring us to think even more carefully about whether we are building our tech in a responsible way. There’s a lot we still have to understand about explainability in AI, which opens the black box up a little bit.

But we also have to look at the data these systems are being trained on, because the whole point is that these learning systems look into the past, try to find a pattern, and then replicate that pattern. And we know there have been, and continue to be, systemic biases, and if we use these systems poorly they’re just going to perpetuate those biases and ultimately reinforce them.

Also, do you share the fear that these models will eventually run out of training data and increasingly use synthetic data, which can reinforce biases further?

I am concerned about using synthetic data to train the models, because that is going to reinforce those biases even more quickly, and you could end up with a race to the bottom. Most people, if they read something, have a pretty good sense of whether it was written by a human or by an AI.

Do enterprises need a chief AI officer, given that this role would overlap with many functions that a CIO, CTO, chief digital officer or chief data officer performs?

Right now, the chief AI officer is in the same kind of position that the chief transformation officer has been in as we’ve been going through these digital transformations: even though AI affects broad parts of the organization in very different ways, having someone at the C-level provides a focus.

You’ve got people who can keep up on things and then work with various parts of the company to see, for instance, how the CMO can use this to help the marketing function. But it would not surprise me if, in a couple of years’ time, we don’t have a chief AI officer anymore.

How do you view the sharpening of focus on autonomous AI agents? What does it mean for businesses and what should they be cognizant of in this context?

We’re still learning a lot about both the potential and the way multi-agent systems can go wrong. 

There are some well-understood models and so this would be an area where I would start simple. If you don’t need a multi-agent system, don’t use one. If you’ve got a way to put a box around your autonomous agent until you understand how it’s going to respond, do it. 

Because depending on what the capability of the agent is, if it decides to run amok, which these systems can do, it will do so at system speed, not people speed. And so the potential for error is greater, just because of the speed at which it can continue to make mistakes.

So my simple advice is: start small, start simple. Don’t add complexity when you don’t need it. You’re already using the cool technology if you’re doing AI; you don’t need to throw multi-agent systems into it.
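
(A rough sketch, ours rather than Parsons’, of what “putting a box around” an agent can look like in practice: an allow-list of tools and a hard step budget, so a misbehaving agent cannot run amok at system speed. The tool names and stubs are hypothetical.)

```python
# Minimal guardrail sketch for an autonomous agent: only pre-approved tools,
# and a hard cap on the number of steps it may take.
from typing import Callable, Dict, List, Tuple

ALLOWED_TOOLS: Dict[str, Callable[[str], str]] = {
    "search_docs": lambda query: f"(stub) results for '{query}'",
    "summarise": lambda text: f"(stub) summary of '{text[:20]}...'",
}
MAX_STEPS = 5  # hard stop, regardless of what the agent proposes

def run_agent(plan: List[Tuple[str, str]]) -> None:
    """`plan` is the list of (tool, argument) steps proposed by the agent."""
    for step, (tool, arg) in enumerate(plan):
        if step >= MAX_STEPS:
            raise RuntimeError("Step budget exhausted; stopping the agent.")
        if tool not in ALLOWED_TOOLS:
            raise PermissionError(f"Tool '{tool}' is outside the box; refusing.")
        print(ALLOWED_TOOLS[tool](arg))

run_agent([("search_docs", "quarterly revenue"), ("summarise", "a long report")])
```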

I would love your thoughts on the debate over whether artificial general intelligence (AGI) will or will not be achieved, given that AI models are getting better at reasoning and understanding context, which is causing a lot of panic in some sections.

First, I don’t believe the planet is going to be destroyed by a paperclip-optimizing AI anytime soon. We still have a lot of questions about what human intelligence actually means.

What I find interesting is some of the speculation coming from people like Geoffrey Hinton (computer scientist and cognitive psychologist, known as one of the ‘godfathers of AI’): it’s conceivable that we’re almost seeing a merging of what used to be the two schools of AI.

You have the neural network-based school and then you have the concept-based one, and there is some speculation that, with the huge number of parameters the latest set of models have, some of what is being learned is actually concepts, as opposed to just word sequences.

We need to agree on what it would take to be intelligent. The Turing test (a test proposed by Alan Turing in 1950 to gauge whether a machine can ‘think’) has been blown out of the water now.

I’ve seen some speculation that you can’t really be intelligent unless you are actually grounded in the physical world. That means ChatGPT can’t be intelligent because it’s not connected to the world. It’s connected to the internet, but it’s not connected to the world.