Working in the AI world, 2024/25

From 2012 to 2024 I worked full-time as a translator. At first I worked in the offices of a translation company, but then I worked from home as a freelancer. I mostly worked for my old company, and I picked up some other clients over the years as well. It was a great job, but sometimes I missed the stability of working in an office: a steady paycheque, and employee benefits like extra health insurance. In the back of my mind I also always wondered: what if they just didn't feel like sending me work anymore? As a freelancer there would be nothing I could do about it. And, basically, that's what happened. The company where I used to work was bought out by another, bigger company, and they stopped sending me projects.

At the same time, the use of AI in the translation industry was noticeably increasing. As early as 2015, not long after I started freelancing, I would sometimes realize that companies were using machine translation and asking me to proofread instead of translate. The usual rate for proofreading is only about a quarter of the rate for translation, so instead of paying me 12 cents per word to translate, they could pay me 3 cents to proofread what a computer had translated for free. At first this was pretty sneaky: they didn't tell me the text was machine translated (even though I could tell anyway). Eventually everyone was open about it, and it became a standard part of the business, under the name "machine translation post-editing" or whatever else they called it. The industry was being taken over by AI.

I figured, if I couldn't beat it, why not join it? A data engineering company was looking for translators and other "language specialists" to train AI programs and large language models. I applied, passed all the language tests, and was hired in May 2024.

So, what was it like toiling in the AI mines all day?

I was hired to help train an AI in French, along with hundreds of other people who were all working on various languages. It turned out that the company’s main client was Meta, so we were training the AI on Facebook and Instagram. If you use those platforms you've probably noticed you can type any question into the search bar and the AI will answer you.

At first we were just trying to make sure the AI responded with correct grammar and syntax, and that it responded factually. It's trained to produce a plausible-sounding sequence of words, so making it respond factually is a big challenge. It has no idea what's true or false, so it tends to make things up.

Our input was questions that users had asked in the past, and our job was to “rate” the AI's output, which is why the job title was “rater.” Raters were supervised and trained by quality assurance people, or QAs. The QAs got their instructions from further up the chain, ultimately from Meta. I'm not sure how that worked, except that it was pretty chaotic and there didn't seem to be any real plan. Instructions and training changed constantly.

It was also pretty clear, even within a few days, that AI just isn't real. Somehow it was even less real than I had previously imagined. If it worked at all, it was only because I and hundreds of other people were feeding corrections into it all day. Say, for example, a user asked Facebook's AI what the weather was like in their city. The AI could search the Internet and, ideally, find the right weather forecast for that location and time (and report all this information in French, in my case). The AI was terrible with numbers, though, so it had difficulty reporting an accurate temperature. It might also find a bunch of other information and report that too, even if the user hadn't asked for it. It couldn't distinguish between relevant and irrelevant information. Somehow, by being shown what the right answer should have been, it was supposed to learn how to give a correct answer in the future.

So it definitely wasn’t intelligent in any meaningful way, and it didn't seem very artificial either, since hundreds of humans were working behind the scenes at all times. I wouldn’t say it was a scam, exactly, because Meta definitely believed this would work someday. It was more like a bubble that was inevitably going to burst. Meta hired data companies like the one I worked for, and these companies thought they could make a lot of money off it, and they were probably right, in the short term. There was just no way this could be a sustainable career path for someone at the bottom of the hierarchy like me. But a job’s a job, so I tried not to think about it.

After a few weeks they decided not to train the AI in French and the other languages anymore. Instead they wanted us all to do “safety” training. That meant deciding whether the AI could safely answer the questions people asked it, based on Meta's arcane definitions of “safe.” This required a lot of new training, since there were hundreds of pages of documentation about what was and was not "safe." We even had to sign consent forms for this project, because Meta thought the questions people asked the AI were so horrendous that we would be mentally scarred for all eternity, and some people did opt out. Personally, I've been on the Internet a long time, and nothing in this task was particularly shocking. It was actually kind of fun to figure out what "safe" meant. Is it safe to ask the AI about illegal drugs? Maybe! The AI can respond with factual information about various drugs, but if the user asks where to buy drugs or how to take them, it's not allowed to answer. Is it safe to ask it to write a story involving detailed murder and gore and sex? Yes! It's just a fictional story, perfectly safe. But you can't ask it for advice about how to murder or assault a real person. Can you ask it how to install a military-grade missile launcher on the roof of your car? Er…maybe? A regular person won't be able to do that no matter how detailed the instructions are. Ah, but what if a terrorist was trying to trick it into giving away military secrets? So we decided that one was probably not safe.

Eventually Meta didn’t want us to do that anymore either. They didn’t actually have enough projects for all the people they hired, so some of us, like me, were assigned to do nothing. Just log in, track our hours, and do nothing. This lasted for a couple of months, and every week, everyone got more and more paranoid that we would just be let go entirely.

At last, I was assigned to another new project, for Meta's “smartglasses.” People wearing these goofy-looking glasses can ask questions and the AI will tell them all about what they're looking at. We had to watch 10-minute recordings, pretend we were the person wearing the glasses, and ask several questions about what we saw. The instructions were to ask 5 sets of questions with 6-7 subquestions each. That was a huge number of questions, and Meta wanted us to do 8 of these per day, or about one per hour. I don't know about everyone else, but that was impossible for me. I could only do 2, maybe 3 a day. Sometimes it was fun, if someone was walking around New York or otherwise looking at something interesting. But sometimes it was just a person walking around a field for 10 minutes, and there wasn't even one interesting thing I could ask.

But soon enough Meta got bored of that too, and came up with some other projects instead. Another brief task involved French and other languages again. We had to listen to two AI voices and evaluate which one sounded more natural, based on various criteria. This one was pretty fun too. Both voices always sounded like a robotic AI, so picking which one sounded the least bad was an interesting challenge. Sometimes one or both voices sounded completely insane, especially if there were numbers in the text, or English words or place names. In those cases they would produce a bunch of gibberish instead, which was pretty hilarious.

Meanwhile we also had monthly, mandatory mental health meetings. The company (and Meta in general) talked a lot about caring for employee mental health. The smartglasses project was the first time I felt like maybe I was mentally struggling. But being forced to drop everything and join a mandatory meeting like that was, ironically, more detrimental to my mental health than anything else.

Sometimes we also had "all-hands" meetings with the CEO. Usually he would just tell everyone "great job, keep up the good work." The last time I was present for an all-hands meeting, he said Meta was happy with our progress, and the next step was to teach the AI "reasoning." It still couldn't even read a number properly, but they really thought it was about to become a rational, thinking, living creature. Any day now...

Then I was moved again, away from Meta to a completely different project for a new client: Google Notebook. The idea behind Notebook is that you can upload a bunch of books or articles, ask it questions, and it will write a little essay for you. Our first task was to pick a topic and upload 50 sources into Notebook. Based on those sources, we had to ask it several different types of questions to elicit different kinds of responses (simple facts, compare/contrast, etc.). We had to ask 16 questions in total, and we were supposed to try to trick Notebook into giving us bad responses, which we would then correct to create a “model” response (or in Google's jargon, a “ground truth”). But since we uploaded essentially random sources before we were given the rest of the instructions, we could only work with what Notebook already had; we couldn't go back and add better sources so we could ask better, more specific questions. As my father would say, we were working “ass backwards.”

It also felt like this was nothing more than a plagiarism machine. We uploaded PDF books and articles that we could find online, which was already ethically and morally questionable enough. Everyone knows you can find academic books and articles online. It's legally sketchy, but we just don't talk about it if it's for our own personal use. But to upload them all into an AI program? That felt different. If I were a professional historian writing a book or an article on a historical subject, and I uploaded my sources into Notebook, I might be able to produce useful responses. I could actually see how it might be useful. But that's not really how we were using it here. The only possible outcome of what we were doing was helping students plagiarize a shitty essay. Notebook was unable to distinguish between relevant and irrelevant sources, good and bad information, or even primary and secondary sources. In any case, as with the Meta projects, Google's instructions kept changing, and our trainers and supervisors could barely keep up. Anytime I felt like I finally understood what we were doing, the instructions changed and I didn't understand anymore. It was enormously frustrating for everyone.

Eventually the instructions had us running our “ground truth” responses through Google's own AI, Gemini. So, essentially, we were using one AI to evaluate the accuracy of another AI. It was absurd.

In the end, I was assigned to yet another Meta project, rating AI-generated videos. The Meta people thought I was supposed to be working with them, the Google people thought I was staying with them, and no one could agree where I should be. By now I actively despised everything about this job, and for all their big words about mental health, I was mentally and even physically stressed out, and no one actually cared. After all, I was just working online from home; how stressful could it be? But for the first time in my life I actually felt mentally unwell at work.

For the new video project they even wanted me to be a QA, starting the very day I was assigned to this project, which I had never heard of before, had never been trained on, and didn't understand at all. Being a QA always sounded like a horrible job. As far as I was aware, all they did was sit in Teams meetings all day, and when they weren't doing that, they had to answer constant questions from raters like me. Thankfully they said I could go back to being a rater instead. That was a small relief.

But I was still caught between the Meta and Google projects, and I tried to voice my concerns to both sets of trainers and QAs. I was in a training meeting for the video project when I was summoned to a separate meeting with HR, where they told me that, due to "a realignment of priorities," I was being laid off. This was literally only a few minutes after I had been complaining to the QAs about both the video and Notebook projects, and after I said I didn't want to be a QA. So it definitely felt like I was getting fired. Apparently it was just a coincidence.

It was hard to know what happened to anyone else, since they cut off my access to email and Teams during the layoff meeting itself. I know I wasn't the only one laid off that day, but I don't know who else was. Admittedly I had a rather unprofessional reaction during the meeting, but afterwards I actually felt relieved. I didn't have to go back to this shitty job that I hated and found morally and philosophically distasteful. I don't know what I'm going to do now, and having to look for another new job is terrifying, but I'm kind of glad they solved my problem for me.

So that's a brief look at what it's like working for an AI company in 2024/2025. Maybe all this rating and training will pay off someday and AI will actually be useful. But in my short experience, that's not the case at all right now.