2016 has been a hard year for many I know. Lots of us are still hanging on to what positivity we can, turning ennui and grief into action for ourselves, for others, and against incoming attempts to install fascism and oppression around the globe.
2016 has also been hard for another reason: social media. No, I’m not talking about bloviated tweets from you-know-who. I’m talking about trying to be a decent person with a public presence on Twitter or Tumblr or Facebook or what-have-you.
These sites have, increasingly, been creating a bubble around what we read daily. No, I’m not talking about this echo chamber, not exactly. Articles like this one blame us for the problem. Instead, I’m talking about the bubble created around marketing data and “engagement” that has caused sites like Twitter to destroy their most useful feature — unfiltered and unfettered access to what people I follow say daily.
I was one of many people against Twitter’s algorithmic timeline, the odious feature that would create exactly the echo chambers the media is now decrying. Like with Facebook, we got here because our opinion doesn’t matter when weighed against businesses — the algorithmic timeline exists to funnel people into easily marketable segments that make data and ads from Facebook or Twitter easier to sell. It also serves to channel what’s most popular or best funded, even if that content is egregious bullshit.
Like many, when I started on Twitter, I used it as a feed to follow people I was interested in daily. This relationship quickly conflated friendship with readership, causing me to “follow” thousands of people while keeping a very short, private list of about 40 that I read daily.
While imperfect, this situation worked. It was easily digestible, and it let me share my art with hundreds of people daily, channeling my creativity in productive and beneficial ways. If you look back to 2014, for example, you see me substantially more positive and outgoing than I feel today.
The algorithmic timeline has, in many ways, provided the inverse. While its impact does not exist in a vacuum (many other changes occurred in 2015 and 2016 that made them especially difficult years for me), Twitter’s increasing focus on popular and paid content has come at a direct detriment to people who focus on building smaller, closer-knit communities. It’s also drowned out many new, up-and-coming, and niche artists who rely on a diverse community to succeed.
Instead of keeping my relationship to my followers linear with time and with tweets, the algorithmic timeline has made this relationship hyperbolic. This means that the very successful or broadly reaching (celebrity tweets, memes, and people who focus on constant engagement to wide audiences of people) fare disproportionately better under Twitter’s new model. Conversely, this model has hurt people like me, who focus on smaller communities in which we feel we do the most good. If this sounds like a metaphor for similar arrangements in society, rest assured — it is.
The constant barrage of “Promoted Tweets”, “Moments”, “In case you missed it”, “Since you’ve been gone”, and “Show me the best tweets first” distracts from the most important feature Twitter once provided: reading what a small collection of friends are saying daily. And that piercingly hurts.
This arrangement has substantially amplified people who use the system to their benefit. For example, artists that use retweet campaigns and raffles (which implicitly apply operant conditioning to a wide audience) have become increasingly successful under Twitter’s model of more favorably valuing favorites and retweets in algorithmic feeds. Artists like myself, who choose to forego these techniques to provide a higher quality of signal on a tweet-by-tweet basis, generally do worse — despite our desire to maintain a more authentic relationship with our audience.
If a hyperbolic model works for you — great! Keep doing what you do. For those for whom it doesn’t work, please understand that authenticity isn’t dead, but it’s much less successful on Twitter than it used to be.
For those of us used to the unfiltered model, the result has been a slow gaslighting. Many people I know (myself included!) were easily convinced, by falling numbers and falling readership over time, that something we were doing was wrong — and that this was our own fault.
Many people who were once the most authentic and interesting voices in my timeline have, increasingly, been expressing feelings of isolation, remorse, burnout, and a lack of connection with people whom they once considered their friends. Through no fault of their own.
Increasingly concerning, many of my friends have, at different intervals, emotionally broken down. Between June and September of this year, I also broke down. I had been so reliant on Twitter as a personal emotional tether that I failed to realize when it had come undone, and worse, had become actively abusive in a hyperbolic model of conversation. It wasn’t just me.
Reading these vignettes from friends has been personally heartbreaking. So many of you are amazing, creative people that are lost in the noise through no fault of your own. Watching as increasingly loud, controversial, and popular content polarizes a once stratified and diverse community is incredibly painful.
It’s not just you. It’s the programming.
If you are like me and have been dealing with this specific form of anxiety, I strongly recommend investing in self-care and smaller communities to help you reconnect. Please remember that you are valuable.
In the vacuum of better social media options (competitors like mastodon.social seem potentially promising), I recommend Discord highly. Others have had success with Slack, Telegram, and traditional forum software. Use what works best for you.
Should you desire to chat, I can be found as Goldkin or GoldkinDrake on most services (and Goldkin#3497 on Discord). Please feel more than welcome to drop me a friend invite and say hello.
Today, the “Number Game” swept Twitter, Facebook, and other social media. The premise is simple: privately message an individual a number, and they will publicly post their thoughts about you to that number.
Secrecy, then, relies on two pieces of information. First, it requires that the recipient party not divulge the name of the party providing the pre-shared numeric key. Second, it relies on each key being used as a nonce — a number used once — since any additional use of the same key is compromised to prior parties.
Unfortunately, as I discovered just by trawling my own timeline, many people failed to provide this second guarantee (affording false positives, such as 666). The rarer — that is, the higher-surprisal — the number that gets reused, the more likely both uses trace back to a single party.
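To make that intuition concrete, here is a small sketch (my own illustration, with made-up popularity figures, not data from the game): if two senders pick numbers independently, the chance they collide on a given number is roughly its popularity squared, so a reused rare number is far more plausibly explained by a single sender.

```python
import math

# Hypothetical popularity of a few candidate numbers among all players.
popularity = {"666": 0.05, "42": 0.04, "7": 0.03, "8675309": 0.0001}

for number, p in popularity.items():
    collision = p * p          # chance two independent senders both pick it
    surprisal = -math.log2(p)  # how "rare" the number is, in bits
    print(f"{number:>8}: surprisal {surprisal:5.2f} bits, "
          f"independent-collision probability {collision:.10f}")
```

A common number like 666 collides by accident fairly often; a reused seven-digit number almost never does, so reuse of it effectively signs the message.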
This has several interesting ramifications. First, it means every identifiable key shared more than once is unmasked to all other recipients. Each recipient will know what the others have said about you.
Second, message passing for this “game” on Twitter is handled by direct message. Because this requires the recipient be following the sending party, there is a public record of a small pool of candidates for every key. By intersecting these candidate pools across each reuse of a key, observers can whittle the candidates down until there exists one (and only one) party who could have shared the original key. Add any metadata provided by the message text itself (such as, “this person’s art…”), and this whittles down even faster.
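The intersection attack can be sketched in a few lines. The names and follow lists here are entirely hypothetical; the point is only that each post about a reused key narrows the sender to that poster’s followed accounts, and the overlap shrinks fast.

```python
# Followed-accounts lists for three people who each posted "thoughts
# about #17". The sender must be in every one of these pools.
follows = {
    "alice": {"dana", "eve", "frank"},
    "bob":   {"carol", "dana", "grace"},
    "carol": {"dana", "frank", "heidi"},
}

# Intersect the candidate pools: only one account could have DM'd all three.
candidates = set.intersection(*follows.values())
print(candidates)  # -> {'dana'}
```

With real follow graphs the pools are larger, but each additional reuse multiplies the constraints, so the candidate set still collapses quickly.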
While this game may be “cute” and “fun”, I do not advise playing. It does not work as advertised. If you continue to do so, please be aware that you are publicly speaking about someone to your audience with dubious, trivially breakable secrecy.
The fact this is so noisy is a matter to address separately. And, with accidental prescience, I shared my thoughts on this yesterday. I intend to filter these messages and continue on my merry way. I couldn’t ask for better data to test my new tools against, so collectively, thank you!
But, as many other people do not have this luxury: please be considerate to your audiences. Twitter is broadcasting these to everyone who follows you, many of whom desire timely, relevant information. Filling their channels with noise is not generally appreciated.
Now, if you’ll excuse me, I’m going to go try lucky number 8. 8 hours of sleep, that is.
As evidenced in blog posts predating this WordPress account, I have a keen interest in social media. Particularly, I would like to make it less of a noisy place and more usable “at a glance” than it exists today.
A recent post from tacit reminded me that, in some respects, this problem has been solved. Email messages are remarkably similar to posts on Twitter, Tumblr, and Facebook, which isn’t a huge surprise given their Usenet and BBS roots, themselves descended from much older methods of human communication. Email messages are furthermore just as disposable, just as informative, and just as easy to abuse as the sites we now use as aggregators for our communication.
So, I wonder: why don’t we see more spam filtering in social media? For that matter: why aren’t we using this same class of machine learning algorithms to determine legitimate interest instead of “driving engagement?”
Is it because the gatekeepers have a vested interest in serving advertisements? Well, in some part, yes: by maximizing the amount of time people spend with the site, reading duplicate but engaging information, they get more opportunities to gather data and serve advertising. But, there’s a second side to this: pattern recognition is difficult to get right generically. For a simple example, try Google Voice transcription or Google Translate. They’re serviceable, but often, hilariously wrong.*
But, there’s nothing stopping client authors from writing their own classifier for each user. Imagine this for a moment: a theoretical Twitter client where each tweet gave you an “interested” and “uninterested” button. “Interested” includes more content similar to what you selected. “Uninterested” selects less, affecting future tweet selection as well. Both have configurable fall-off and the ability to randomly bypass the filter as desired, so even as one’s interests change over time, you’re guaranteed to not get a perfect echo chamber (unless you want one, of course).
And I’m not even thinking of anything fancy here. Under the hood is just a naive Bayes classifier trained on your selections. If you want to be fancy, consider using information entropy as a low-pass filter. Maybe you want to use your favorite machine learning algorithm instead? Sure, go for it.
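As a concrete sketch of that idea — under my own assumptions, not any real client’s API — here is a minimal word-level naive Bayes filter with Laplace smoothing, trained only on the user’s own “interested” and “uninterested” votes:

```python
import math
from collections import Counter

class InterestFilter:
    """Per-user naive Bayes over tweet words. Names are illustrative."""

    def __init__(self):
        self.counts = {"interested": Counter(), "uninterested": Counter()}
        self.totals = {"interested": 0, "uninterested": 0}

    def vote(self, label, tweet):
        # One button press: fold this tweet's words into the chosen class.
        self.counts[label].update(tweet.lower().split())
        self.totals[label] += 1

    def score(self, tweet):
        """Return P(interested | tweet); 0.5 means neutral."""
        words = tweet.lower().split()
        vocab = len(set(self.counts["interested"])
                    | set(self.counts["uninterested"])) or 1
        total_votes = sum(self.totals.values())
        logp = {}
        for label in self.counts:
            # Smoothed class prior, so an untrained filter stays neutral.
            logp[label] = math.log((self.totals[label] + 1) / (total_votes + 2))
            n = sum(self.counts[label].values())
            for w in words:
                # Laplace-smoothed per-word likelihood.
                logp[label] += math.log((self.counts[label][w] + 1) / (n + vocab))
        delta = logp["interested"] - logp["uninterested"]
        return 1 / (1 + math.exp(-delta))  # log-odds back to probability

f = InterestFilter()
f.vote("interested", "new painting finished tonight")
f.vote("uninterested", "retweet to enter the raffle")
print(f.score("finished a new painting"))  # well above 0.5
print(f.score("raffle raffle retweet"))    # well below 0.5
```

The configurable fall-off and random filter bypass from the proposal above would layer on top of this score; the classifier itself stays this simple.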
My point is, of the gamut of social media clients available, very few bother to give us the tools necessary to manage information. I find this incredibly strange: either we’ve forgotten why these tools were relevant in the first place, or social media has forced us to recontextualize what is fundamentally the same problem as spam in email.
So, I intend to try this for my own purposes. Since I interact with the majority of short-form social media using IRC (using the truly wonderful IRSSI and Bitlbee), I’m going to write this into a plugin that color-codes incoming messages by how interesting they are. At any time, I can add the text to my classifier by just passing the message through it. All unclassified incoming messages push classification closer to neutral, while all “interested” and “uninterested” votes push them closer to bold or invisible (but never so invisible that mouse highlighting can’t unmask them).
And, we’ll see how that goes. If it works, I might advocate for other people to try it (or patch it into clients until more do).
* Conversely, this is why products like Dragon NaturallySpeaking are so good. They train on a set exclusive to the user, which is better at picking up deviations in patterns of speech, accents, etc. If Google or any of the other large organizations provided a training set per user, this could improve.