Digital doubles: In the future, virtual versions of ourselves may predict our behavior

Jordan Richard Schoenherr, Assistant Professor, Psychology, Concordia University, Montreal (Canada) (The Conversation) A digital twin is a copy of a person, product or process that is created using data.

It may sound like science fiction, but some have claimed that you are likely to have a digital double within the next decade.

As a copy of a person, a digital twin would ideally make the same decisions you would make if presented with the same material. This may seem like yet another speculative claim by futurists.

But it is far more plausible than people might like to believe.

While we may assume that we are special and unique, with a sufficient amount of information, artificial intelligence (AI) can make many inferences about our personality, social behavior, and buying decisions.
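As a rough illustration of the kind of inference involved, the Python sketch below trains a toy classifier to guess a buying decision from a few behavioral traces. The feature names and data here are invented for the example; real profiling systems use far richer inputs and models.

```python
# A toy illustration (not a real profiling system): inferring a
# buying decision from simple behavioural traces.
from sklearn.linear_model import LogisticRegression

# Hypothetical traces per user: [site visits, minutes browsing, past purchases]
behaviour = [
    [2, 5, 0],
    [8, 30, 1],
    [1, 2, 0],
    [12, 45, 3],
    [6, 20, 1],
    [0, 1, 0],
]
bought = [0, 1, 0, 1, 1, 0]  # 1 = made the purchase

model = LogisticRegression().fit(behaviour, bought)

# Infer a new user's likelihood of buying from their traces alone.
new_user = [[7, 25, 2]]
print(model.predict_proba(new_user)[0][1])  # probability of a purchase
```

Even this trivial model shows the pattern the article describes: given enough labelled behavior, an algorithm can generalize to people it has never seen.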

The age of big data means that vast amounts of information (collected in so-called data lakes) are gathered about your openly expressed attitudes and preferences, as well as the behavioral traces you leave behind. Equally shocking is the extent to which organizations collect our data.

In 2019, The Walt Disney Company acquired Hulu, a company that journalists and advocates noted had a questionable record on data collection. Seemingly benign phone applications, such as those used to order coffee, can collect huge amounts of data from users every few minutes.

The Cambridge Analytica scandal illustrates these concerns, with users and regulators worried about the prospect of one's behavior being identified, predicted and changed. But how worried should we be?

High fidelity versus low fidelity In simulation studies, fidelity refers to how closely a copy, or model, matches its target.

Simulator fidelity refers to the degree of realism a simulation has relative to real-world contexts. For example, a racing video game provides images that speed up or slow down as keys on a keyboard or buttons on a controller are pressed.

A video game has less fidelity than a driving simulator, which might have a windscreen, a chassis, a gear stick, and gas and brake pedals.

A digital twin requires a high degree of fidelity: it must be able to incorporate real-time, real-world information. If it is raining outside now, it would be raining in the simulator.
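As a minimal sketch of what real-time fidelity means in code, consider a twin object that continually mirrors readings from the physical world. The sensor feed below is faked with random values, and all class and function names are hypothetical.

```python
# Minimal sketch: a digital twin mirrors real-world state as it arrives.
# The "sensor" here is simulated; a real twin would subscribe to live feeds.
import random

class WeatherTwin:
    def __init__(self):
        self.state = {"raining": False, "temperature_c": None}

    def ingest(self, reading):
        # Keep the simulated world in sync with the physical one.
        self.state.update(reading)

def read_physical_sensor():
    # Stand-in for a real sensor or weather API call.
    return {"raining": random.random() < 0.3,
            "temperature_c": round(random.uniform(-5, 30), 1)}

twin = WeatherTwin()
for _ in range(3):  # in practice this loop runs continuously
    twin.ingest(read_physical_sensor())
    print(twin.state)  # if it is raining outside, it rains in the twin
```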

In industry, digital twins can have radical effects. If we are able to model a system of human and machine interactions, then we can allocate resources and predict shortages and breakdowns.
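One simple way such predictions can be made, sketched below with invented numbers and a hypothetical threshold, is to watch a twin's mirrored sensor values for a trend toward a known failure point.

```python
# Illustrative only: flag a machine for maintenance when its mirrored
# vibration readings trend toward a failure threshold.
FAILURE_THRESHOLD = 10.0  # hypothetical vibration limit (mm/s)

def predict_breakdown(readings, horizon=5):
    """Extrapolate the recent linear trend and warn if it crosses the limit."""
    if len(readings) < 2:
        return False
    slope = (readings[-1] - readings[0]) / (len(readings) - 1)
    projected = readings[-1] + slope * horizon
    return projected >= FAILURE_THRESHOLD

vibration = [4.1, 4.8, 5.9, 6.7, 7.6]  # mirrored from the physical machine
if predict_breakdown(vibration):
    print("Schedule maintenance before the projected failure.")
```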

A human digital twin would incorporate vast amounts of data about an individual’s preferences, biases and behaviors, and would be able to obtain information about the user’s immediate physical and social environment to make predictions.

These requirements mean that achieving a true digital twin is a remote possibility for the foreseeable future. The number of sensors needed to accumulate the data, and the processing capability needed to maintain a virtual model of the user, would be enormous.

Currently, developers settle for low-fidelity models.

Ethical issues Creating a digital twin raises social and ethical issues related to data integrity, a model's predictive accuracy, the monitoring capability required to create and update a digital twin, and the ownership of and access to a digital twin.

British Prime Minister Benjamin Disraeli is often quoted as saying, "There are three kinds of lies: lies, damned lies and statistics," implying that numbers cannot be trusted.

This sentiment reflects a misconception about how statisticians collect and interpret data, but it remains a significant concern: a digital twin depends on collecting and analyzing data about our behavior and habits to predict how we will behave in given situations.

One of the most important ethical issues with the digital twin relates to the quantitative fallacy: the assumption that numbers have an objective meaning divorced from their context.

When we look at numbers, we often forget that they have specific meanings that come from the measuring instruments used to collect them. And a measuring instrument may work in one context but not in another.
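A small, contrived example of the point: the same raw number can mean different things depending on the instrument and context that produced it, so comparing bare values is misleading. The measurement scenario below is invented for illustration.

```python
# Contrived example: a bare number loses meaning without its context.
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float
    instrument: str   # what produced the number
    context: str      # where and how it was collected

a = Measurement(36.5, instrument="forehead IR thermometer",
                context="outdoors, winter")
b = Measurement(36.5, instrument="clinical oral thermometer",
                context="hospital ward")

# Numerically identical, but an IR reading taken outdoors can under-read,
# so treating the two values as interchangeable is the quantitative
# fallacy in miniature.
print(a.value == b.value)            # True: the numbers match
print(a.instrument == b.instrument)  # False: their meanings may not
```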

When collecting and using data, we must acknowledge that some characteristics are selected and others are left out. Often, this selection is made out of convenience or due to the practical limitations of the technology.

We should be critical of any claims based on data and artificial intelligence as the design decisions are not available to us. We must understand how the data was collected, processed, used and presented.

Power imbalance The power imbalance surrounding data, privacy and surveillance is a growing topic of public discussion.

On a smaller scale, it can produce or widen the digital divide: the gap between those who do and those who do not have access to digital technologies. On a larger scale, it threatens a new colonialism based on access to, and control of, information and technology.

Even the creation of low-fidelity digital twins provides opportunities to monitor users, infer their behavior, attempt to influence them, and represent them to others.

While this can help in healthcare or education settings, failure to give users the ability to access and evaluate their data can threaten individual autonomy and the collective well-being of society.

Data subjects do not have access to the same resources as large corporations and governments: they lack the time, the training and perhaps the motivation. Continuous and independent oversight is needed to ensure that our digital rights are protected.
