Technology is advancing faster now than at any previous point in history. The world has changed so much in the past twenty years that it’s almost impossible to remember what it was like before. Think back to the 90s for a minute. It’s 1997. You’re meeting some friends to see Titanic at the cinema. How do you coordinate? If your friend has already left her house, how do you reach her? You can’t text her; cell phones aren’t commonplace yet. You’re at least a decade away from a group chat on your smartphone.
I don’t know about you, but when I think back to my youth, my brain edits a smartphone into my pocket. A world without that ready access and always-on connectivity, the world I was born into, has become foreign to me.
With all these advances, it’s fair to wonder how foreign the world of today will seem by 2037. How will we have gotten by without such-and-such, that tech innovation that will have reshaped the way we do everything?
But 2037 isn’t too far off. The tech we’re building today, our prototypes and software experiments, will become tomorrow’s infrastructure in a few short years. We’re already seeing self-driving cars on the road, for instance. That’s why it’s desperately important that we be responsible, and self-aware, in how we develop these innovations.
Here’s the problem: as tech becomes more complex and more accurately emulates human actions (think self-driving cars, face-detection software, natural language processing, or machine learning), it also takes on some of our more shameful traits. Let’s explore some of the ways that tech is already reflecting and integrating our biases, our inequalities, and our prejudices.
Siri in Scotland
This post will get into some uncomfortable territory, so I want to start with something light. There is an absolutely brilliant skit, a send-up of this very problem, in which a voice-activated elevator can’t understand our protagonists’ Glaswegian accents.
(“Oh no, they’ve installed voice-recognition technology in this lift, they have no buttons.”
“Voice-recognition technology? In a lift? In Scotland? You ever tried voice-recognition technology?”)
It’s played for laughs (and played brilliantly), but it illustrates a very real problem. Voice recognition tech is in its infancy, but it’s already displaying a clear preference for certain accents and dialects over others. It’s an accident of circumstance, rather than a deliberate social barrier for the underclass, but the result is the same.
The people training voice recognition tech to understand speech tend to be affluent, educated, and often of a certain race and class, and their speech reflects their intersectional status. Developing voice recognition tech isn’t a trivial matter. Tech behemoths like Google, Amazon, Apple, Microsoft, and Samsung are leading the pack, with a lot of money, market share, and prestige riding on their efforts. That leads to a selection bias on their R&D teams, with a preference for engineers and software developers with more education and experience. As we’ll see, that level of education generally correlates with affluence and social privilege.
In any case, if an AI reads that (educated, aristocratic) way of speaking as “right,” then different kinds of speech (say, that of a recent immigrant or an underprivileged youth from the inner city) become “wrong.” It becomes a constant microaggression to be told, again and again, that your way of speaking isn’t compatible with the software your more affluent neighbors don’t seem to have any trouble with.
The Racist Webcam
This one was a major scandal about three years ago. A laptop was released whose headline feature was face-tracking software paired with an integrated webcam. In theory, it would automatically pan, zoom, and focus on a user’s face for applications like video chat. And the tech was pretty reliable, for a certain subset of the population. See, to detect a face, its algorithm looked for eye sockets. To avoid false positives, it looked for a certain ratio of light to dark between the eye sockets and the more reflective skin over the cheekbones.
From the heading, you can probably see where this is going. The webcam didn’t “see” people with darker skin and wouldn’t track their faces. In effect, it only worked for light-skinned users. No one had intentionally designed it that way, of course; rather, it was an emergent property of a broader system. There’s a good chance that the R&D team that developed the tech had a heavy racial imbalance. We’ll come back to that idea later.
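To make the failure mode concrete, here’s a minimal sketch in Python. It is not the laptop’s actual algorithm (that was never published); the function name, the contrast cutoff, and the brightness numbers are all hypothetical. It simply shows how a single hard-coded threshold can quietly encode an assumption about skin tone.

```python
# Illustrative only: a toy face check built on the kind of contrast heuristic
# described above. The threshold and brightness values are hypothetical.

def looks_like_face(eye_socket_brightness: float,
                    cheek_brightness: float,
                    min_contrast_ratio: float = 1.8) -> bool:
    """Guess "face present" when the cheek region is sufficiently brighter
    than the eye-socket region. The bias lives in min_contrast_ratio, a
    cutoff calibrated (in this sketch) on images of light-skinned faces."""
    if eye_socket_brightness <= 0:
        return True  # a fully dark socket exceeds any ratio
    return cheek_brightness / eye_socket_brightness >= min_contrast_ratio


# A light-skinned face under office lighting: strong socket/cheek contrast.
print(looks_like_face(eye_socket_brightness=40, cheek_brightness=110))  # True

# A dark-skinned face under the same lighting: the contrast is real but
# smaller, falls below the cutoff, and the face is simply never "seen."
print(looks_like_face(eye_socket_brightness=35, cheek_brightness=55))   # False
```

The point isn’t this particular math; it’s that the cutoff gets tuned on whatever faces the team happens to test with, and everyone outside that sample inherits the miss.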
The White Supremacist Chat Bot
Just last year, Microsoft released a Twitter bot named Tay. She was, as she described herself, totally innocent and ready to learn about human conversation. Through machine learning, she folded users’ messages back into her own models to better emulate human conversation patterns. It was a wonderful experiment, but it went wrong almost immediately. In less than a day, poor Tay was spewing alt-right propaganda, racist invective, and disturbing social commentary.
Now, that says a lot more about the kind of person who would deliberately ruin a social experiment for lulz than it does about Tay’s initial programming, but it does raise serious questions.
Sooner or later, we’re going to need to let artificial intelligences observe and use public data. The loudest voices are often the most volatile, and the least informed. How do you teach a machine that the most loudly avowed opinion is probably the wrong one, while still allowing it to pick up on common knowledge? Is there some threshold of volume or repetition below which an opinion is probably safe to absorb, and above which it’s probably vitriolic propaganda? How do you teach critical thinking to a machine without simply transposing your own opinions onto the system?
The Intolerant Digital Camera
It’s not surprising that our last example is another camera. The easiest way to tell two people apart is to look at them.
In this example, the camera had an assistive feature to help a user take good-quality pictures, especially of groups of people. It would flash an alert if someone had blinked during the shot, so the picture could be retaken. Hardly a necessary feature for a digital camera with a screen, but it was a popular gimmick, and it represented an early foray into practical applications of facial recognition technology.
However, like the webcam that looked for light contrast, this algorithm made its inferences based on the ratio between eye height and width. It reliably misidentified people whose eyes are naturally narrower, particularly people of East Asian descent, as having blinked, and called them out as such.
The internet made it into a tasteless parlor trick, but that doesn’t invalidate the core problem. The algorithm was devised with the underlying assumption that eyes would look a certain way, and the tech took on those presuppositions without any understanding of nuance or context.
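Again, here is a hedged sketch rather than the camera maker’s real code: the ratio, the 0.22 cutoff, and the pixel measurements below are invented. It shows how a blink check built on a single eye aspect ratio, calibrated on one population’s eye shapes, turns an open eye into a “blink.”

```python
# Illustrative only: a toy blink detector using an eye height-to-width ratio.
# The blink_threshold and the pixel measurements are hypothetical values.

def someone_blinked(eye_height_px: float, eye_width_px: float,
                    blink_threshold: float = 0.22) -> bool:
    """Flag a blink when the eye opening looks "too flat." All of the bias
    sits in where blink_threshold is drawn, i.e. in whose open eyes were
    used to calibrate it."""
    return (eye_height_px / eye_width_px) < blink_threshold


print(someone_blinked(eye_height_px=12, eye_width_px=40))  # False: read as open
print(someone_blinked(eye_height_px=8, eye_width_px=40))   # True: read as a blink,
                                                           # even if the eye is open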
The Haves and Have Nots
Each of these examples shares a common problem: the people designing the tech didn’t anticipate the experiences of people different from themselves, and so didn’t teach the tech how to handle those inputs.
This is another example of a self-perpetuating disproportion. A hundred years ago, most educated people were white and male. Education and resources tended to translate into wealth, and children from wealthier families had more options for networking, better schooling, and so on. That disproportion persists even today, with the tech and science industries skewing heavily towards certain ethnicities and demographics.
This disproportionate representation will take years to dissipate, but for today, even an awareness of its potential for harm will help us mitigate the damage.
The Future of Machine Learning and Digital Marketing
As digital marketers, our skills depend on the tech we use. We rely on analytics tools, social media platforms, HTML for emails and web design, and more. We depend on software that is even now being developed in part through machine learning (like the algorithms with which Google ranks different pages in SERPs). Those systems were designed by architects with their own biases and expectations.
It’s just as much our responsibility to use our tools thoughtfully as it is the responsibility of developers and engineers to question and account for their own innate biases, presumptions, and prejudices when building the technology that will, in a very real sense, reshape the world.
Colibri Digital Marketing
Colibri Digital Marketing is San Francisco’s only B Corp-Certified digital marketing agency, focusing on the triple bottom line of people, planet, and profit. If you’re ready to work with a digital marketing agency you can be proud to partner with, drop us a line to schedule a free digital marketing strategy session!