Finding an apartment to rent is already a stressful process in many cities, so imagine not being able to reach a human for help when you sign a lease. Until now, we’ve been used to dealing with bots over the phone to solve more “trivial” problems. Some places in the U.S., however, have taken a step closer to a world without human contact: They’re creating landlords that are actually artificial intelligence chatbots.
My landlord is an AI. As The New York Times reported a few days ago, some property managers use artificial intelligence chatbots to handle tenant inquiries and complaints, and even questions from prospective renters. In theory, the move is a triumph for management: No more complaints about how long it takes to resolve a problem, and staff available 24/7.
However, the implications of this trend go far beyond simply streamlining procedures between property managers and tenants. We’re talking about AI chatbots that don’t introduce themselves as such and even have different “personalities” for dealing with customers, depending on the situation or person. There are even bots with threatening tones.
Jason Busboom’s bots. Busboom manages a large apartment complex in Dallas, Texas, with 814 units. Until recently, he had a team of eight people handling the complex’s problems. Now there’s Mat, who’s friendly and helpful. He’s an AI bot that does everything from sending text messages to handling inquiries and scheduling tenant appointments.
The team also has Lisa, who’s very professional and informative. She’s a bot dedicated exclusively to rentals and answers questions from prospective tenants. Finally, there’s Hunter, the strictest of the three algorithms, a bot that follows up on non-paying tenants and reminds them to pay.
Millions of dollars in savings. As in many sectors where AI is becoming increasingly important, there’s a clear economic motivation behind AI landlords. Having an algorithm as the only point of contact for rental problems translates into savings of both money and time. According to a 2023 report by the McKinsey Global Institute, AI could generate up to $110 billion in value for the real estate sector.
Who’s behind these bots? Companies like New York-based EliseAI provide these chatbots. According to the Times, the company serves owners of about 2.5 million apartments in the U.S. CEO Minna Song says that in addition to being available via chat, text, and email, the bots can interact with tenants through voice messages and even have different accents, all à la carte (for the owners and managers, of course).
Song says these chatbots can help with anything related to home maintenance, from monitoring and troubleshooting via chat to sending videos to tenants showing them how to fix a water leak while they wait for a plumber.
Home bot ethics. There are many concerns about these chatbots. A tenant might feel insulted if the AI doesn’t identify itself as such and they’re led to believe they’re talking to a human. EliseAI’s current capabilities are so convincing that Song says some tenants have approached the leasing office to “ask for Lisa.” Some even leave gift cards for the chatbot or send it messages asking it out for coffee.
But software programmer Ray Weng told the Times that AI chatbots made his apartment search even more difficult. He had to talk to an AI several times about renting an apartment and received vague, repetitive answers. Visiting the properties was no better: Even the tours were self-guided.
When algorithms rule us all. AI isn’t only moving into sensitive areas like employment. Algorithms also play an important role in the U.S. housing market through algorithmic price-fixing, a practice that extends to many other sectors, from airlines to meat production and various online businesses. It’s also something that happens around the world.
As we said at the beginning, AI has also spread to citizen services at all levels. It’s no longer limited to private companies: Government agencies and official institutions have also adopted the technology, letting an algorithm handle people’s problems.
What about liability? Having an AI landlord in the U.S. seems like a logical consequence of all this dystopia, but it also raises other questions. What happens if it makes a mistake? Who is responsible? And what if the chatbot gives a tenant false information or makes a promise that a human landlord can’t keep?
The question isn’t hypothetical. We’ve already seen what happened this year with Air Canada: It had to compensate a customer who bought a ticket after its AI chatbot misrepresented the airline’s bereavement policy, telling him he would get a refund.
Perhaps we should start by requiring chatbots to identify themselves as such. “People might see deception as disrespectful,” Alex John London, a professor of ethics and computational technologies at Carnegie Mellon University, told the Times. Meanwhile, as chatbots continue to exhibit biases and errors, we’ve found a “perfect” solution: monitoring them.
Images | Elias Rovielo | Xataka On with Bing Image Creator | NCinDC | Josefine S