You Shouldn’t Use Facial Analysis At Your Event

Attendees at PCMA’s 2023 Convening Leaders were observed by cameras “pretty much everywhere”

I had a great time at this year’s PCMA Convening Leaders. I met interesting people and attended informative sessions. On the final morning I went to a session titled “AI, Biometrics and Better, More Targeted Experiences.” A few minutes in, I was surprised to learn we were being watched at that very moment by cameras analyzing our age, gender, and emotions. That is not something I’m interested in having done to me. I was angry and my impulse was to walk out, but I wanted to know the extent of the system. Once we got to the Q&A, the presenter said it was being used “pretty much everywhere.”

I’ll lay out why this is a bad practice below. PCMA should have explicitly informed attendees that this system was being used, and more importantly, it should prevent the use of a similar system in the future. If you’re an event professional, I recommend you avoid AI facial analysis for your meetings and events.

Facial analysis vs facial identification

The company that ran the system at CL23 is called Zenus. The company’s co-founder and the session’s presenter, Dr. Panos Moutafis, said the system uses facial analysis rather than facial recognition. It can estimate a person’s age, gender, race (as part of a broader “inclusivity index”), and emotion, but it does not attempt to individually identify an attendee. Zenus calls this “ethical AI.”

What’s the issue?

I’m skeptical the system does everything we’re told. But let’s grant for a moment that it works as claimed and the data is accurate. Even in this ideal scenario, what’s wrong with AI facial analysis?

  • Decisions will be made using AI that removes responsibility from humans

The presenter said some of his clients “…will monitor in real time and if a speaker is killing the mood they will just get him off the stage”.

That didn’t strike me as desirable. While a system can make observations, it doesn’t understand the why of what it’s observing.

What if a session’s content is important but doesn’t produce facial expressions a computer would categorize as “positive sentiment”? Imagine a speaker presenting a difficult truth – someone from a disadvantaged group describing a hardship, or a worker conveying the situation on the ground to leadership. AI facial analysis would show the audience wasn’t happy, and so maybe those presenters aren’t invited to speak again. (Or, God forbid, they’re given the boot in real time.)

Important decisions (like event programming) shouldn’t be handed off to an algorithm. The presenter’s anecdote implied exactly that kind of use for the service, though during the Q&A he did say organizations “…should not make a decision purely on one technology.”

  • Cameras controlled by computers are subject to unauthorized use

Maybe we can assume the operators of the system won’t use it in an objectionable way. Even so, as an event organizer you are creating the risk that an unauthorized user will gain access to cameras that are now located in every event space.

As security expert Bruce Schneier writes, “All computers are hackable.” That’s not to say an intrusion will ever happen at your event, but it will happen somewhere. Software vulnerabilities in any such system (here’s one example and another) present a risk not only to personal privacy but also to intellectual property and physical security.

  • The scope of surveillance will increase

The system at Convening Leaders was used only for AI facial analysis, and the presentation included a slide highlighting its privacy advantages. However, if you visit the service’s website (zenus.ai), the product offering lists “facial recognition” immediately after “facial analysis.”

The presenter also said, “We have launched a new feature that will detect QR codes on badges 6 or 7 feet away.” He clarified that this feature will require attendee consent, but that “then you can start understanding – this person went to this activation, they stayed this long, they went back again the next day, that’s how they were feeling.”

So while the concept as initially pitched makes a gesture towards attendee privacy, the natural endpoint (and the feature set being actively developed) is to gather significant personal information.

(As just one example of where this can go wrong, see the story of a scout mom who was kicked out of Madison Square Garden because of facial recognition.)

Even if we set aside the first three points, here’s the fundamental issue:

  • People have a right to privacy

The way we feel about something belongs to us. Sometimes we have reason to not share our emotions with others.

Have you ever been thinking about something, noticed someone looking at you, and then changed your expression? Suppose you’re out in public. Would you react differently to someone glancing over and seeing your expression versus someone following you for eight hours and noting every expression you make? (Maybe they’re also writing down their guesses about your age, biological gender, and racial group.)

There’s a qualitative difference between ordinary one-off social interactions and constant, unblinking surveillance.

Informed consent

The slides for the session I attended said consent for facial analysis is “Not required.” The presenter affirmed this: “You don’t even fall under GDPR if you do it this way…because GDPR applies when you have personally identifiable information,” and “That’s why consent is not required.” He did say that signage was “recommended.” To my knowledge there was no signage at CL23. And if consent isn’t legally required in this situation, the law should be changed.

I took a look at the terms we agreed to when registering. The “Media Waiver” section says, “Media may be displayed, distributed or used by PCMA for any purpose,” which I wouldn’t take to mean AI facial analysis. For consent to mean anything, it must be explicit and understandable what someone has agreed to.

Uncoerced consent

During the Q&A, I asked the presenter whether, legality aside, he thought this was something attendees wanted. He demurred and compared it to CCTV cameras.

The moderator said, “Quick show of hands… who wouldn’t have attended PCMA if they’d known this ahead of time?” I wasn’t surprised there weren’t many hands. People get value from the event. Some people need to attend as a job requirement.

If I don’t care for the subject of a given session, I can simply skip that session. If I don’t want to be monitored by a facial analysis system, it’s all or nothing. Coercing someone into agreeing isn’t the same as consent. If an attendee can’t get reasonable value from your event without agreeing to AI facial analysis, then it shouldn’t be used.

What we can and can’t control

The people on stage defended the use of the facial analysis system by comparing it to widespread CCTV cameras and to the omnipresent “Accept All Cookies” prompts we see on websites. Their argument: the public already accepts those things, so why should we feel differently about this?

I don’t like those either, and would change them if I could. Despite regulations designed to protect visitors, websites use “dark patterns” to trick and wear people down so they’ll accept tracking cookies. And there are many reasons to oppose public video surveillance. We can see examples of CCTV abuse in countries like Iran.

The difference is that, as an event professional, maybe I have some say in my little corner of the world. Event attendees are our guests. We’re trying to create an appealing space for them to learn, have conversations, and make meaningful connections. Using AI facial analysis is the digital equivalent of hiring someone to stare at our guests’ every move.

I urge PCMA to prevent the use of AI facial analysis at future events, and I hope you’ll resist its use at your meetings and events too.

