Seeing through digital games of deception
This post is not about social media detox, although it’s worth mentioning that it can be an empowering experience. More than ever before, technology is so deeply ingrained in the way we live that it has become seemingly inescapable. The main point is to actually know how it works; after all, knowledge is power.
Having said that, it’s interesting to hear from those who have documented their social media detox. They notice an overall improvement in their quality of life, realising the benefits of making a deliberate effort to control the amount of time they spend on social media.
This usually improves their well-being, their productivity and the quality of their off-screen relationships. Celebrities tend to have a social media manager or agency looking after their accounts, but other people with a considerable following have switched off completely for a while, encouraging others to do the same.
Wherever we stand, it’s definitely worth looking into our own relationship with social media and coming up with a way of dealing with it that is beneficial for us. But what matters most is to know how it actually works.
“Every time you click they learn more about you. Our information, private data and unknown habits are traded on for advertising space and dollars. The price we’re all paying is much higher than it appears. Whereas normally we’re the consumer buying a product, in this ever-changing digital world, we are the product.” Prince Harry (Guardian)
Tristan Harris, President and Co-founder of the Center for Humane Technology, went further when he testified at a Congressional hearing in January 2020 entitled “Americans at Risk: Manipulation and Deception in the Digital Age.” You can watch it here.
If artificial intelligence, automation and new digital technologies remain unregulated, their impact can be very dangerous, both economically and politically.
“In world capitals, courtrooms and among the public, we are wrestling with what it means for tech giants to have enormous influence on our lives, elections, economy and minds.” Shira Ovide (New York Times)
Based on old emails and texts, this article by The New York Times also reports how Mark Zuckerberg suggested that buying competing apps is an effective way to eliminate competition.
It’s worth pointing out that this doesn’t happen exclusively with Facebook and Instagram. In the fashion industry, for instance, most luxury brands belong to just two groups, Kering and LVMH, both based in France.
Bernard Arnault, chairman and chief executive of LVMH, the world’s largest luxury-goods company, features in the top 10 of Bloomberg’s Billionaires Index, but most of the people on that list are based in the United States and lead companies in the tech industry. The first three are well known to all of us:
Jeff Bezos (Amazon)
Mark Zuckerberg (Facebook)
Bill Gates (Microsoft)
Tech companies do have an unprecedented level of influence and power. Their development shows no signs of slowing down, and the pandemic has only accelerated how deeply tech is woven into our lives.
“As lockdowns around the world closed offices and made working from home compulsory for vast sections of the working population, businesses and individuals grasped for a way to carry on at a distance. If it felt like everyone was suddenly using Zoom, that’s because they were: in April, Zoom peaked at over 300 million daily meeting participants – up from ten million in December 2019.” Victoria Turk (Wired)
Yes, it opens up new possibilities for flexible working, online learning and engagement with digital communities, but there’s another side to it. Is it a coincidence that the cartoons published by The New Yorker magazine a few days ago echoed Derek Blasberg’s post?
It’s a bit like a city that never sleeps: people may end up working longer and longer hours without clear boundaries around time off and rest, which are actually linked to increased productivity. There’s also a serious risk of burnout, which doesn’t help anybody in the long run.
Much more worrying is how artificial emotional intelligence has been gaining momentum over the last few years. Surprisingly, there’s not much written about it outside specialist magazines, and not many people seem to be aware of it.
To be honest, I first heard about it in a webinar hosted by Wired magazine with Rana el Kaliouby, co-founder and CEO of Affectiva, an emotion measurement technology company that has developed software to recognise human emotions.
Much like Tristan Harris, Rana el Kaliouby is driven by a desire to humanise technology, for example by improving road safety.
But her company also sells a product to “Optimize brand content and media spend by measuring consumer emotional responses to videos, ads and TV shows – unobtrusively and at scale.”
“Unobtrusively” is questionable. There are other similar companies working on emotion recognition. Back in January 2016, The Wall Street Journal reported that Emotient had just been bought by Apple.
You can watch a video by Emotient’s CEO, Ken Denman, talking about how “Micro-expressions Reveal what You're Really Thinking”, published in December 2015.
Microsoft researcher Daniel McDuff is developing technologies that give machines the ability to accurately sense people’s emotions. Microsoft’s AI Blog notes that this “could allow an intelligent assistant to do things such as recognise when a patient skipped medications and alert a caregiver.”
Microsoft staff are participating in the research voluntarily. “As people opt in to the project – employees in building 99 can also install the system on their local computers and choose when to turn it on and off – we’ll begin to better understand how we impact each other,” McDuff noted.
Facial recognition is a growing area in which major companies have been investing for a while. It offers such a wide spectrum of possibilities that it can be both exciting and daunting.
The positive effects mentioned above are definitely promising, but people could also become even more vulnerable to manipulation: led to buy things they don’t need, or fed fake news that pushes them to act against their own best judgement.
Or worse, what if this technology falls into the wrong hands? It could certainly be used as a mechanism of social control, not only in the workplace but also politically.
Without wanting to ring alarm bells about the beginning of the Apocalypse (after all, I’m writing a blog post based on credible news sources, not a science fiction book), it’s undeniable that we are at a crucial moment in time to intervene in and regulate cyberspace.
It’s important to steer technology in a new direction, and only the tech giants themselves have the resources and means to protect people. I have no idea how this new Renaissance could be achieved, but it’s clear technology needs to become more human.