Inside the AI decision-making process at a global agency


While most AI companies are headquartered in Silicon Valley, American federal legislators have yet to take up sector-wide regulation. From the White House, President Donald Trump has gone so far as to encourage AI development and the rollback of federal technology regulations that could hinder how AI is built and deployed.

Still, it is a nuanced conversation, shaped by advertisers’ appetite for AI, government regulation and campaign objectives, according to the head of AI responsibility at a global agency.

“It’s not only regulatory, but structural and also cultural – how ready [a brand is] for change and how much people are encouraged to innovate relative to the risks,” said the agency executive.

In this conversation – part of our Confessions series, in which we trade anonymity for candor – the AI lead digs into international discussions about the use of AI, the risks around data collection and where agencies fit in.

This interview has been edited for length and clarity.

How do clients determine how they want to use AI and how far they will take it – brainstorming only, or AI in the final output?

It varies considerably by category, by market and just by corporate culture. That’s often where I get brought into these conversations – the client wants to have an affirmative point of view on “will we use synthetic faces, will we use a photographic style or an animation, will we disclose, will we use cloned voices?” Often I have to facilitate that conversation with a client and provide examples of who in the market is taking a more affirmative versus a more conservative position. Geography can be a slightly better predictor in some cases than category.

So is geography a better predictor of client appetite? Say more.

Germany, for example, has very strict rules around sovereignty and data governance. We can reasonably predict that a German car manufacturer will be a little more conservative than what we might see [with a non-German brand]. I’m in the middle of one of these situations right now. It’s a financial services client, and the marketing organization wants to use AI. The finance and risk organization doesn’t want us to use AI, and we are effectively playing a mediation role within the client organization to help educate and drive those decisions.

For a brand running a global campaign, how does the approach to AI change depending on geography?

There are so many ways to cut this. Many of these regulations apply very differently if you train your own model versus if you use a model [created by the tech platforms]. If you’re worried about a lot of the biases that go into training an image model, marketing agencies are generally not training models from scratch. What we do is customize them.

For example, for a car manufacturer, we teach the model how to render the logo or the grille correctly. We upload product photographs that belong to the client to help it reproduce that. But very often, the role we play ends up not being what is explicitly addressed in regulation, because we don’t process user data. We don’t train custom models. We curate the training corpus. Think of agencies as sitting more in the middle, where we help activate and we help personalize, but we don’t have the gigantic servers and the millions of images that the foundation model labs use.

So geography is one consideration when it comes to guardrails. It sounds like the other is data collection.

What agencies are relying on a ton at the moment is indemnification. We want commercial coverage and protections and copyright protections. Everything we work with our clients to create is commercially covered. Once you host and train your own models, all of that goes out the window. You’ve essentially traded commercial security and protection for technological sophistication, but also for flexibility and the responsibility of working in this space. So it’s largely a trade-off. When our clients [are] using AI just to create static artifacts, creating ads with AI at this stage is easy, as opposed to building AI into the customer experiences themselves, which raises the bar for the level of personalization but also for the guardrails you have to put on those tools.

In the United States, there are various data privacy regulations around AI on a state-by-state basis. Does this have an impact on your clients’ approach to using AI in campaigns?

The first thing that happens is any client SOW [statement of work] will go to our legal group. We have outside counsel that specializes in AI, intellectual property and licensing. We have all of our internal approval mechanisms and our governance processes. We go to the experts where we need to. The governance questions are often: “We want to minimize data collection, because that’s obviously what creates a large part of the exposure, collecting or storing data.” Would it work just as well if we didn’t send anything, if we never collected it? A large part of what we do is minimization, asking: “Can we create this experience without even entering regulated territory?”

Has there ever been a global (or even national) campaign where AI regulations, data privacy and the like caused a problem?

I have never seen a brand platform [or campaign] that hinged on AI. It’s really more that AI is the experiential layer. It’s a tool in production. It’s two of the 10 steps.


