We share common interests with policymakers on user security: Meta’s Antigone Davis

How do you view user security in the face of emerging regulations?

We share the same interests as policymakers when it comes to security. We want people to be safe on our platform, we want them to be able to connect on our platform, and we think there’s a need for proper industry standards so that we’re all on the same page about what’s expected, and so the industry has clear guidance.

I think it’s important that in that guidance, we make sure that people still have access to these technologies, that they’re still competitive, that they’re still creative, and that people can still make connections. I believe that with the cooperation of policymakers, we can land at the right place. And we really welcome those standards.

Big Tech has mostly sought uniformity in regulations around the world. Does this affect how you design security standards?

Well, we definitely want more and more uniformity. We’re building our platform at that scale, so we want to standardize at scale. That said, countries differ, and we recognize that some of those differences will be reflected in regulation. But this is one area where, by communicating and collaborating, we can get to something that is globally consistent.

I will give you an example. Think about age verification, and knowing the age of users so that we can provide age-appropriate experiences. It’s a huge problem for the whole industry, but it is something we have taken seriously, and we have put technology in place to help us identify users’ ages. We also know that policymakers around the world, for the most part, think it’s important for companies to understand age and provide age-appropriate experiences.

So, we are seeing conversations at the moment, including in India, about parental consent and age verification; we are seeing the same conversations in the US and Europe. I think it’s imperative for our company to find a way to provide an age-appropriate experience, to do that globally, and to set a standard that works globally. It’s really important to try.

There is some talk about using ID as a way to verify age. ID has some value, and some countries have national ID systems, such as India. But even with those systems, there are many people who don’t have ID and who would lose access if presenting an ID were the only option. Furthermore, ID forces the industry to collect more information than is necessary to verify age. That doesn’t mean it shouldn’t be an option, but there are other potential technologies; for example, one that uses faces to estimate age, which is very accurate and does not require collecting other information. To get there, we need to engage with policymakers to achieve that consistency.

How do you look at content security, what people should and shouldn’t see?

Well, we have our own Community Standards, and we try to balance people’s ability to express themselves with making sure people are safe on our platform. We also have tools, some of them working in the background, that we use to find content that may violate [the standards] and remove it from the platform. We also think about borderline content, content that doesn’t necessarily violate our policies but may be more problematic in the context of young people.

Sometimes that content at the edges can be problematic, especially for teenagers. We don’t recommend it, and we age-gate it for teen users.

Can you give us examples of these tools that work in the background?

Yeah, so going back to the age issue. Regardless of when we actually verify age, we use background technology to try to identify people who are lying about their age, and we remove them if they are under 13. Where someone claims to be 13 or over, we may use these signals to prompt that person to verify their age, and if they are unable to do so, we take action against that account. And just as we do to identify violating content, we use these kinds of signals to train and build classifiers.

Have IT regulations and other existing rules affected how you build your security systems at all? Any tweaks you had to make?

I think we have not waited for the rules. We heard policymakers’ concerns before they started regulating, and we worked to create solutions, because it takes a long time to make laws and regulations. Meanwhile, we feel we have a commitment to the security we want to ensure for our users. So I don’t know that we’ve made any specific changes in particular, but we’ve been listening to policymakers for a very long time and trying to address their concerns.

How does security change in the context of video? Do your technologies change?

I don’t know if the standards change, but the technologies certainly do. For example, some of the ways we’re trying to address security in the metaverse are different.

This is because of the complexities that exist there. We actually have moderators who are present in an experience and can be brought into an experience by anyone using the platform. That is very different, but it is needed because it is a dynamic space; we don’t have moderation in the same way in a space that is primarily text-based or photo-based.

How are you balancing disclosing proprietary information to policy makers that may be necessary to build policy around the platform?

Yeah, I think more and more we’re seeing a push for a better understanding of our technologies. We have seen some laws that ask for risk assessments. And I think in many ways our company has tried to be proactive in providing insight into our actions, and in providing ways to measure and ensure accountability.

We’re trying to build those bridges, so that we can provide the kind of transparency that lets people hold us accountable and measure our progress.

You are right that you have to find a balance that allows companies to protect proprietary information, but there are ways. As we have shown, there are ways to give policymakers enough information to enable them to understand these things.

I think the other danger is that what you understand today may not be the technology of tomorrow. So, to some extent, building legislative solutions that focus on processes without being too prescriptive is probably the best way to ensure that we develop laws and standards that have a lifespan.
