It’s no secret that I have a love-hate (mostly hate) relationship with social media. From its early beginnings, I was puzzled by the mania that characterized it; I didn’t understand what people were getting out of it that was so compelling. But I played along. Mostly, I used it to see the latest pictures of my kids and grandkids.
As most of you know, I’m a writer. I write books, articles, scripts, all kinds of things. A few years ago, I wrote and published a novel called “The Nation We Knew.” In it, I described an American president who took to task all of the fossilized processes, procedures, institutions, and practices that no longer work, and who then had the audacity to suggest new ways of actually getting things done. The book, I’m happy to say, rose to the top of Amazon’s political fiction category and became an international best-seller for a short time. It still sells well. Its message is hopeful, purposeful, and realistic.
I want to reiterate that the book is a novel. That means fiction—you know, invented. Made up. Fiction. But about ten days after the book came out and was doing well, the Internet trolls got ahold of it, and began to trash it on social media. Whatever—criticism is part of the game. The positive reviews far outweighed the negative, but at one point the negative comments grew dark and threatening.
I tend to ignore reviews for the most part. Sure, I scan them occasionally to see if someone has found some boneheaded error on my part that needs to be fixed, but trying to read every review or comment is just not possible. But when the reviews got nasty and began to threaten me and my family, I drew a line in the sand.
I found it puzzling that a large percentage of the negative reviews began with variations of, “Well, I haven’t read it, but it’s clearly…” Seriously? You haven’t read the book, but you get to post a review?
So, I contacted the organizations where the offensive and meaningless posts were appearing. I won’t give you their names, but I’ll give you hints. One of them is the largest online retailer on Earth; the other is the largest social media platform on Earth. Their response? Crickets. Despite numerous attempts on my part to get them to rein in the trolls, I got nothing. They didn’t even dignify my concerns with responses or suggestions.
Think about this for a moment: Book reviews are being posted on these platforms that begin with the phrase, “Well, I haven’t read it yet…” Apparently, that’s okay in their minds. No one should have to read a book to review it.
The better the book sold, the more vitriolic the comments became, some of them noting where I lived. That was it. I pulled down all my social media accounts. I don’t miss them.
I also found it interesting that when I dumped social media (and I should say, I didn’t suspend it—I deleted it), I had thousands of so-called ‘friends’ on the platform. Most of them I couldn’t pick out of a lineup. But what I found intriguing was that no one—not a single person on that list of ‘friends’—sent me a message asking if I was okay, given that I hadn’t been on Facebook in some time and my account was gone. Friends? Really? The people I consider to be my friends stay in touch, just like I stay in touch with them. We write (yep—letters), we meet for coffee, we talk on the phone, we Zoom, we send emails.
I just finished reading Jaron Lanier’s book, “Ten Arguments for Deleting Your Social Media Accounts Right Now.” For those of you who don’t know Jaron Lanier, take a few minutes to dive into his rabbit hole. Based in Berkeley, he is a pioneer of virtual reality (he popularized the term and founded VPL Research, one of the first companies to sell VR hardware), and he advised Linden Lab, the company behind the online world Second Life. He’s a bit of a whackadoodle, but he’s also a serious technologist—he’s got major creds—and he’s a musician, a husband, a father, and an author.
I’m not a conspiracy theorist. I’m a realist. I do have a background in tech, because I spent 43 years in the telecom and IT industries. I know how this stuff works. That includes the algorithms that power social media platforms. Lanier understands them, as well. In fact, he gives a great example to illustrate just how powerful and potentially harmful they are.
Think about Wikipedia. We all use it. But when I was a kid, if I wanted to look something up, I grabbed the dictionary or encyclopedia off the shelf, and after I finished being distracted by the Mylar overlays of the systems of the human body, or some weird animal that appeared on whatever random page I turned to, I looked up whatever it was that I wanted to understand. It gave me the same result, every time.
Wikipedia emulates this. If I look up Jaron Lanier on Wikipedia, I get the biographical entry that is typical of Wikipedia: brief summary, early life and education, personal life, in the media, creative works, awards, and so on. It doesn’t matter who looks him up, or where they are, or what they do, or what they believe—everybody gets the same information about Jaron Lanier, just like looking up a historical character in the encyclopedia. Yes, there might be a slight tweak here and there, but only to correct an error or expand the information presented.
But now imagine a different outcome. What if the results delivered to you by Wikipedia for a particular lookup—again, I’ll pick on Jaron Lanier—were different, depending on who you are? If someone is a technophile and artist and they look up Lanier, they get a long, glowing biography of a devoted family man who uses both sides of his brain equally well because he’s both a brilliant technologist and gifted musician (he is). But if the person executing the search is a technological luddite who believes that AI and virtual reality will bring on the destruction of human civilization, the result will portray Lanier as a dreadlocked nutcase.

In other words, in both scenarios, algorithmic processes analyze the person conducting the search, paying close attention to their search history, where they live, what they buy, how much money they make, what party they are affiliated with, who their online friends are, what they watch, what kinds of restaurants they visit, what music they listen to, and a thousand-thousand other things, all in a fraction of a second. And in that same fraction of a second, the application formulates a customized search result and places it carefully in front of the person, a result that reinforces their belief—their personal bias.

The results for the technophile and the technophobe are completely different, yet they ask identical questions of the same system. And the result? A wedge is driven between people from each camp, and ‘what we believe’ becomes more important than ‘what we know.’ ‘What we know’ becomes the false narrative.
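The mechanics are simpler than they sound. Here is a toy sketch, in Python, of the kind of scoring a personalization layer performs; every name, weight, and profile signal here is hypothetical, but the shape is the point: the system scores each candidate framing of the same subject against the viewer’s inferred profile and serves whichever one best matches their existing leanings.

```python
# Toy illustration (all names and weights are hypothetical) of profile-based
# personalization: score candidate framings of the same subject against a
# viewer's inferred interests and return the best match for *that* viewer.

def match_score(profile, features):
    """Sum the viewer's weight for every signal the candidate carries."""
    return sum(profile.get(signal, 0.0) * weight
               for signal, weight in features.items())

def personalize(profile, candidates):
    """Return the candidate that scores highest for this viewer."""
    return max(candidates, key=lambda c: match_score(profile, c["features"]))

# Two framings of the same biography, tagged with the audience signals
# each one appeals to.
candidates = [
    {"title": "Brilliant technologist and gifted musician",
     "features": {"pro_tech": 1.0, "arts": 0.5}},
    {"title": "Dreadlocked prophet of digital doom",
     "features": {"anti_tech": 1.0}},
]

# Profiles inferred from clicks, purchases, location, and so on.
technophile = {"pro_tech": 0.9, "arts": 0.7}
technophobe = {"anti_tech": 0.8}

print(personalize(technophile, candidates)["title"])
print(personalize(technophobe, candidates)["title"])
```

Same question, same system, two different answers: the technophile’s profile scores the glowing framing highest, the technophobe’s profile scores the hostile one highest, and each reader walks away with their bias confirmed.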
This is precisely how social media works. The deck is stacked against the user; it favors the house, in this case the House of Zuckerberg. I have always exhorted participants in my workshops or keynotes to listen carefully when I say, “Don’t be fooled: when the product is free, you are the product.”
Social media platforms destroy inclusive community and force us to behave in more of an exclusionary tribal fashion. These companies make money by personalizing the content they deliver, but not in a benevolent way: the content is designed to sell more targeted advertising, which causes users to spend more money. But the results are insidious, and some would argue (and I’m one of them) that they endanger the very tenets of democratic society. Every time we click on a post or advertisement, we help the platform companies to more narrowly target the ad content they put before us. The result is that what we see gets narrower and narrower, ultimately forcing us into ‘one-person tribes’ who fear the tribes around us. There’s a reason that sociologists have issued warnings about the impact that social media has on teenagers. At a time in their lives when socialization is of utmost importance, social media pits young people against each other, making them believe that they’re not smart enough, pretty enough, athletic enough, or popular enough to deserve self-worth. That’s not sad: that’s criminal.
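That narrowing is a feedback loop, and you can watch it happen in a few lines of code. The sketch below is a deliberately crude model (the topics and weights are invented), but it shows the mechanism: the platform shows you the topic your profile slightly favors, you click on what you’re shown, and the click sharpens the profile, so within a few rounds a nearly balanced feed has collapsed into a one-person tribe.

```python
# Toy feedback loop (hypothetical topics and weights) showing how
# click-driven personalization narrows a feed: show the favored topic,
# record the click, and let the click reinforce the favorite.

def recommend(profile):
    """Show the single topic the profile currently favors most."""
    return max(profile, key=profile.get)

def register_click(profile, topic, boost=0.5):
    """Engagement sharpens the profile toward the clicked topic."""
    profile[topic] += boost

# A nearly balanced starting profile: one interest is barely ahead.
profile = {"politics": 1.1, "sports": 1.0, "cooking": 0.9}

seen = []
for _ in range(5):
    topic = recommend(profile)
    register_click(profile, topic)  # the user engages with what they're shown
    seen.append(topic)

print(seen)  # every round shows the same topic; the feed has collapsed
```

The slight initial edge for one topic is all it takes: the loop never shows anything else again. Real ranking systems are vastly more sophisticated, but the incentive they optimize for, engagement, points in the same direction.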