I need your help. There’s a small chance that you may have heard about me, but even so you’ll have long forgotten. I had my 15 minutes of fame seven years ago when I chose to sacrifice my husband to save five strangers. My name is Betty Angel. Back then the press dubbed me the “Angel of Mercy”, but since then software ethicists have come to call me the Avenging Angel of Death. This is my story.
It was the early days of what eventually became known as “the trolley problem”, although back then we thought we were just dealing with terrorist attacks. One morning I was walking to work in my hometown of San Francisco. My husband Bob had left the house thirty minutes earlier to go to work, or so I thought. As I trudged up the hill, I noticed someone lying in the middle of the road. Concerned, I ran towards the person, quickly seeing that it was Bob and that he had been tied to the Powell-Mason trolley tracks.
“Betty, get me out of here,” he pleaded. But his bindings were steel coil, and without the key to the lock I couldn’t set him free. Worse yet, I could hear a trolley coming, but it was around the corner and out of sight. Bob was going to die if I didn’t do something.
“The guy who left me here said that you need to throw that big red lever over there to save me by switching the trolley onto the other track,” Bob begged.
“Why the hell didn’t you say that first?” I yelled, running over to the lever. It was then that I saw the problem. The trolley was coming down the hill, fast. At the same time, I could see that five people, homeless street people from the looks of it, were tied to the second set of tracks. Throwing the lever would result in them being horribly killed.
I ran out into the middle of the tracks, waving frantically at the trolley driver. But the trolley kept coming. The driver could see me, and I could see him desperately trying to get the trolley under control, but to no avail. I didn’t know it at the time, but the trolley controls had been remotely disabled. I had to act.
Long story short, Bob didn’t make it. I’d like to say that I chose to sacrifice Bob to save the five homeless guys, and on the surface of things I guess I did. But the truth of the matter is that I sacrificed Bob because I had been thinking of serving him with divorce papers for several months, so letting him come to this tragic end was just a quickie divorce in my mind. Bob never saw it coming. The divorce, I mean. He definitely saw the trolley.
In the aftermath I was taken downtown and interviewed by a pair of detectives; I believe their names were Stone and Keller. They wanted to get my side of the story while it was still fresh. But about twenty minutes into the interview, a strange thing happened. A senior officer entered the room, spoke quietly with the interviewing detectives, and the three of them left. Almost immediately after that another gentleman, dressed immaculately in a three-piece suit, entered the room and sat down across from me.
He said, “Mrs. Angel, my condolences for your loss. My name is James Holmes. I’m a lawyer who represents a group of ethics researchers who would like to compensate you for your loss in return for your agreement to not hold them liable for your husband’s untimely death.”
I didn’t need to give this a lot of thought. I woke up that morning married to a man I wanted to divorce, and just a few hours later he was completely out of my life and I was being offered $500K to be okay with that? Hell yeah.
“I need to think your offer over,” I replied. No, I didn’t. “I just need to know why ethics researchers would feel responsible for killing my one true love. We were just about to start a family.” My bet was that $500K was his low-ball offer.
James looked at me with pity in his eyes. “Mrs. Angel, normally we wouldn’t share such details with the victim’s family. However, in your case, we can make an exception. Once you’ve accepted our offer and signed a non-disclosure agreement regarding this incident and everything that we reveal to you about this life-critical research, then we can bring you into the fold.”
Two days later I received a payout of $850,000 – $500K was in fact just the initial offer – and was filled in on the true story. These “ethics researchers” were working on something called the Trolley Problem, an experiment in which people are given a no-win scenario and must decide who should be run over by a trolley. Research involving public transit in America? How esoteric and futile could you make it? Anyway, these dirtbags had been coming up with new combinations of people to kill – one stranger versus several strangers, a family member versus several strangers, a young person versus several senior citizens, and so on. The idea was to see what ethical trade-offs people make in an impossible situation.
For decades this research was conducted as a theoretical exercise: people were typically interviewed or given a survey to fill out to identify how they would react. The fear, particularly within the AI ethics community, was that this research wasn’t truly getting to the heart of the matter because everyone knew that nobody was really going to die. Then one day the law changed. Lobbyists, funded by billionaire tech bros in the highly competitive AI market, had convinced our duly elected representatives that a few of us were expendable in the name of corporate profits. Just another Tuesday in America. The planets had perfectly aligned for this subset of “ethics researchers.”
The next summer a rash of terrorist attacks involving trolleys took place, particularly in San Francisco. Attacks also occurred on university campuses where anonymous donors had funded trolley switch testing sites consisting of roughly four acres of land with intricate track layouts. You’ve surely heard of the MIT Massacre on the Labour Day long weekend, when 17 people died in four separate trolley terrorist incidents. The mainstream media claimed it was Elbonian freedom fighters hoping to gain attention for their cause. Those of us who do our own research know that Elbonia is an imaginary nation made up by a right-wing cartoonist. What really happened was that the MIT AI “ethics” guys had a successful string of live-action experiments, financially backed by tech bros desperate to obtain real-world data.
Bob was killed two weeks after the MIT Massacre.
Part of my deal with James Holmes was that I was brought in on the real story. I suspect they thought I was one of them simply because I was willing to make a quick buck for sacrificing my beloved husband. They were wrong. Sure, getting rid of Bob was the best thing that ever happened to me, but I could see that they would continue to kill people in the name of technical progress in AI. That didn’t sit right with me.
With my newfound wealth I decided to arm myself. I ran out to the local Target and bought an AR-15, a sniper rifle, a combat shotgun, and several automatic handguns – all for home defence, of course. God bless the Second Amendment and the Presidential order removing all gun control legislation nationwide in the wake of the MIT Massacre. I spent a weekend at a friend’s place in the desert getting familiar with my guns. I was good to go.
My first goal was to save as many people as I could from the ethics researchers; my second was to take out as many of those researchers as I could via high-velocity lead poisoning. These ethics researchers had declared war on the American people, so I declared war on them.
At first it was easy. I would get up at 2am every morning and drive along the tracks of the San Francisco trolley car system looking for newly installed levers. Whenever I found one I’d climb up onto the roof of a nearby house with my sniper rifle and wait for the ethicists to come by with their intended victims. Easy peasy, lemon squeezy.
One time I came upon a pair of researchers tying a young girl to the tracks. I dispatched them with one of my Glocks. The girl, who couldn’t have been older than fifteen, led me to a panel van parked around the corner where I found five senior citizens in the back. I freed them too and called the police to come pick them up as I drove away.
After two months my prey realized something was going on and they went to ground. I moved on to dealing with the trolley switch testing sites set up at various university campuses around the country. These sites proved to be even easier to deal with. For each site it was easy enough to do a quick Internet search to find out who was doing trolley problem ethics research, where their campus offices were, and what they looked like. In many cases I just showed up during their published office hours, walked in, and popped them. In addition to disposing of the researchers I also took out the testing sites using explosives. With the eradication of gun control, it had become legal to purchase C4 if you promised it would only be used for home protection, and in most states you could buy it at your local convenience store.
I did run into a problem on the East Coast, not surprisingly at MIT’s trolley switch testing site. I was in the process of setting explosives when I noticed movement at the other end of the test site. It was one of those robotic dogs I’d seen on YouTube. Worse yet, it was one of the ones built for the US military with a machine gun attached to its back. This was bad. Luckily it hadn’t seen me yet, so I immediately withdrew and returned to the motel where I was staying.
At the motel I had my laptop, toothbrush, and weapons cache. I travelled light; every few days I simply bought a new set of clothes to wear to avoid wasting time cleaning them. I liked to remain focused on my mission and thereby keep victory within reach.
After several hours of Internet research, I found a blog describing how to deal with this type of robot. The recommendation was to buy several Furbies (remember them?), teach them to say “squirrel!”, and scatter them around where you believed the robotic dog would patrol. That day I picked up several Furbies, once again at Target, spent a few hours setting them up, and attached some C4 with a remote detonator to each. At dusk I went back to the site, turned one of the Furbies on, and tossed it over the fence. Twenty minutes later the robot came through on patrol, picked up the Furby in its mouth, and I detonated the bomb remotely. Easy peasy, lemon squeezy.
As an aside, Iran used the same strategy when it successfully pushed back the American invasion in January 2027. If you remember, that was the first large-scale autonomous robot attack force, and it was thwarted using $20,000 worth of children’s toys.
Within a few months all live-action trolley problem research in the United States stopped because of my reign of vengeance. At one point I caught wind of a trolley testing site being built by the University of Toronto up in Canada. That was problematic because they had border checks specifically looking for weaponry, so I couldn’t bring my gear up with me. They also had effective gun control in place, so I couldn’t simply pop out to a 7-11 for C4. Goddam Canucks and their rule of law. But then I remembered they still had actual journalists up there. Remember journalists? It was simple enough to call the local TV station and rat out the ethics researchers. A few days later the RCMP, the Canadian version of the FBI (remember those guys?), shut the ethics dirtbags down and forced them to apologize. Canadians love saying sorry.
That was when everything got harder. The tech bros had ramped up their AI ethics research efforts by pivoting to funding live crosswalk research instead. The crosswalk problem is like the trolley problem: there are two groups of people in a crosswalk and you’re driving a car that can’t stop, so you need to choose whom you run over. This version of the problem is deemed important by AI ethics researchers because it informs how they should train self-driving cars to decide whom to terminate when given the choice. Or something like that; who knows what the tech bros are really up to?
During the summer of 2027 there was a rash of automobile-related terrorist attacks around the country. Individuals, couples, young families, children, people pushing strollers, small groups of senior citizens, large groups of homeless people, all being mysteriously run down. And always within a few blocks of an AI research lab, unsurprisingly.
This presented me with a problem. My first thought was to simply stake out crosswalks near AI labs. But there were too many labs, and every lab had too many crosswalks within reasonable walking distance. It was a numbers game that I couldn’t win.
Then it struck me. I could use AI to fight AI. I turned on my laptop, brought up my secure web browser with DeepSeek R4 built into it, and asked it where and when the most likely crosswalk terror attack near me would occur. After a few iterations of tweaking the prompt I got a likely answer: tomorrow afternoon, at an intersection in Menlo Park.
I drove down the next morning, parked my car in the storage lot at the corner of the intersection, and waited for my prey to appear. Around noon a gaggle of nerds walked over from a corporate campus on the other side of the street and started offering money to local homeless people. It was clear what they were about to do, so I got out of my car, calmly walked over to the nerds, and helped them shuffle off their mortal coils. It was a good start.
For the last five years I’ve effectively been playing whack-a-mole. Tech bros would fund live-action AI ethics researchers, I would eliminate them, and then we’d rinse and repeat. Five years of writing prompts, setting up stakeouts, and popping anyone who looked like they were forcing two groups of people to cross the street at an inopportune time. Did I make mistakes? Probably. But at least I put an end to the AI ethics research reign of terror in North America.
Why have I told you all of this, making this recording while sitting here in a nondescript restaurant in Chinatown? Because I need your help. See that table over there with four people arguing about how to share two sets of utensils between them? They may appear to be four philosophers out for dinner, but trust me, they aren’t. I can’t do this alone; there are simply too many restaurants and too many university philosophy departments. I need your help to stop this new round of live-action research before too many innocent lives are lost. Will you join the fight?
1 Comment
Larry O’Brien
Good ending. It’s already long but I expected an arms race where the tech bros train “what will Betty Angel’s model predict”.