AI Astrology Is Getting a Little Too Personal

The first thing I ask Banu Guler, the founder of the astrology app Co–Star, is whether she can read my chart. We swap phones to look at each other’s profile. After we put our devices aside, she scrawls my astrological chart from memory into her notebook, a circle bisected by various lines like an erratically cut pie. It’s not looking good. There’s a 90-degree square between my sun and my Mars, which is, she lowers her voice and chuckles, “rough.” Apparently, it’s the shape that represents “sad and temporary.”

Since its launch in 2017, Co–Star has contributed to a resurgence of Western astrology. The company claims that it’s home to 30 million registered accounts; a third-party analysis from data.ai shows that nearly 800,000 people use the app in a given month. Co–Star offers daily predictions about your life, arbitrary “Do” and “Don’t” lists that dictate how you should go about your day, and charts that tell you how compatible you are with your friends. Its language fluctuates between direct and vague, much of it coated in a candy shell of snark.

Earlier this year, the company introduced an “Ask the stars” chatbot that is best described as a modern Magic 8 Ball: You pay a small fee to type in questions about your life and receive direct answers in response, courtesy of artificial intelligence. (After a few free questions, $2.99 gets you five more.) “Welcome to the void,” the chatbot beckons when you pull the feature up. I asked it whether I’d met my soulmate. No, it told me. Are my friend and I drifting apart? Yes, it responded, giving shape to a nascent suspicion. What should I have for breakfast? Oatmeal. With the Moon opposite your natal Neptune, you are experiencing a conflict between your emotions and a desire for clarity. Eggs simply wouldn’t do.

As with any artificially intelligent program, Co–Star makes decisions according to its training data. In this case, the app claims that “every answer is based on Co–Star’s proprietary database of astrological interpretations, custom built over five years by poets, astrologers, and technologists.” It’s also, of course, informed by my personal astrological chart: my birthday, birth time, birth location. The answers can be odd, and they’re a far cry from having Banu Guler (or any other human) read your chart live, but they do feel personalized. The language is humanlike—it relies on models created by OpenAI, the same company behind ChatGPT—and the citation of both my astrological chart and NASA data lend the responses a peculiar authority. I can’t lie: They’re compelling.

This level of personalization is distinct—and unsettling. Astrology has long benefited from what is known as the Barnum effect, the tendency for people to believe that generic descriptions apply specifically to them. (Just think about horoscopes printed in a newspaper.) Co–Star’s adoption of generative AI allows the app to assert, more than before, that its advice is targeted directly at you as it tells you to find a new therapist or reveals that you have psychic abilities. The app takes on “a larger role than most divination experts would take,” Beth Singler, a digital-religions professor at the University of Zurich, says. When it directed me to take a break from my partner, Singler told me, “I can’t think of any [divination leaders] I’ve ever encountered who would give such a definitive answer.”

According to Guler, Co–Star has employed AI since the company was founded, when a more rudimentary technology spliced daily readings together from a pre-written text database. (She told me that Co–Star has been working with OpenAI for several years, though she would not elaborate on the nature of the relationship.) Still, the arrival of “Ask the stars” is a prism into the complex ways that new advances in generative AI can seep into people’s spiritual and moral lives, even through the most mundane decisions. Although much has been said of the technology’s practical effects—whether it will come for our jobs or redefine warfare—it also might influence us in ways that are more intimate and much harder to quantify.

For many people, that influence should be easy enough to avoid. This is, after all, astrology. Not everyone occupies themselves with divination, and even among those who enjoy Western astrology, many don’t take it seriously. (For me, it’s a fun way to bond with friends, but I lost no sleep over Guler’s analysis of my life.) Some people do take it that seriously, though. A 2018 Pew Research survey found that more than a quarter of Americans believe that the position of the stars and planets has power over people’s lives, and the use of other astrological systems in certain cultures to guide major life decisions is far from new. Just as pertinently, AI has been working its way into a variety of spiritual practices: Religious leaders have written sermons with ChatGPT, and AI avatars led a Mass in Germany.

Inviting AI into the more private, personal domains of our lives comes with its own set of risks. One might think people would be less trustful of advice that comes from a machine, but as Kathleen Creel, a professor who studies both religion and computer science at Northeastern University, explained to me, spirituality’s extremely subjective nature can make AI’s shortcomings and mistakes harder to identify. If an AI-powered search engine tells you, say, that no country in Africa has a name that begins with the letter K, its powers are instantly dubious. But imagine an AI chatbot that’s trained on your own preferences and habits telling you that exercising in the morning will set you up for success. Things are murkier if that success never arrives. Maybe you just need to wait longer. Maybe the problem is you.

Whenever people perceive AI as better, faster, and more efficient than humans, “our assumption of its superiority places it up in this godlike space,” Singler said. That assumption, she cautioned, “obscures all the humans in the machine.” AI chatbots summon clear, definite answers as if by magic, with little indication that the technology itself is made up of our own convictions, flaws, and biases fed into algorithms. Clear, definite answers have an obvious appeal, especially when the world feels unpredictable; over the first year of the pandemic, for instance, searches for birth chart and astrology reached a five-year high worldwide. In times of crisis, one has to wonder how willing some people might be to look to chatbots like Co–Star’s for guidance—to outsource decision making, however big or small.

I asked Guler, over drinks near Co–Star’s headquarters in Manhattan, if she worried about the risk of a growing dependence on AI for life advice. Her answers were a bit like reading Co–Star itself, vague and specific in turn. She explained that the company doesn’t permit users to have ongoing conversations with the “Ask the stars” bot, unlike a number of other AI chatbots. The bot resets after each question, no follow-ups allowed, to try to prevent people from falling too far down the rabbit hole. Co–Star staffers also look at the percentage of people who screenshot particular types of answers and whether a user repeatedly asks versions of the same question, Guler told me, though she evaded the question of what they do with the information. Co–Star further claims that the chatbot rejects 20 percent of questions because of “potential risks”—queries about self-harm, for example.

Beyond safeguards built into Co–Star’s operation, Guler attempted a grander defense—one that, frankly, seemed nonsensical. She argued that the quality of the astrology delivered by the AI should, in and of itself, be a protection against overdependence. “The aspiration is that when Co–Star content actually hits, which is how we call it internally, it slaps you. You pause and, like, you can’t continue consuming,” she said. “Like, nobody’s addicted to Tolstoy.” She seemed to pick up on my skepticism. “The question isn’t how do we prevent dependency, which I think is a solvable but not terribly interesting question,” she continued, “but more like how do we make every sentence hit? Like, really hit?” I nodded while she took a pull from her vape.

As Guler and I parted, I thought of the people I had seen lined up at a large metal box the size of a vending machine that Co–Star had been using to market its new feature. The machine was installed in a magazine shop in Manhattan over the summer and has since taken up residence in Los Angeles, running the same software as “Ask the stars” but with preprogrammed questions. On the Wednesday evening that I stopped by to see it, 10 people were lined up—a tight fit for the back corner of a bodega. After querying the machine, I looped back to the end of the line in the hopes of waiting out the crowd so that I could have it all to myself.

I asked the man behind me whether he wanted to go first. He explained that he’d just gone but had asked the wrong question and wanted to try again; he was curious about whether he’d be able to get a new job in the same industry, or if he should try a new career entirely. I noticed the gentle wringing of his hands and decided to give him his space. As I walked out of the store, I looked back at him, a lone figure plugging questions into the void.

 
