AI in multifamily and hospitality property management
How AI is streamlining operations while raising new risks for residents and guests

AI adoption and the development of new tools are accelerating rapidly, with different implications for different real estate sectors. To understand what the rise of AI means for the multifamily and hospitality sectors, JPM spoke with two experts:
- Sanil Paul: associate in investments at RLJ Lodging Trust, with more than a decade of U.S. and international experience in hospitality and real estate
- FNU Marsella: senior financial analyst at RBH Group, with a background in real estate development, operations, and investment
How is AI commonly used in multifamily and hospitality property management?

FNU Marsella
FNU Marsella: AI is becoming much more common in multifamily as operators look for ways to drive efficiency while still delivering a high-touch resident experience. I’ve seen it help teams work smarter and make the living experience smoother for residents.
In leasing, AI virtual assistants are handling more of the front-end communication, such as answering questions, booking unit tours, and guiding prospects through the early steps of renting. These tools operate 24/7, ensuring no missed leads and faster response times. At Ascent Midtown in Atlanta, for instance, a virtual assistant named Redd helps prospective renters by scheduling tours, providing pricing details, and answering leasing questions in real time. Across Greystar communities, this type of AI support has helped drive a 112% increase in lead-to-tour conversion rates, according to Elise AI, demonstrating its significant impact in a competitive market.
On the operations side, predictive maintenance is gaining traction. Companies like AvalonBay are using AI to monitor building systems in real time. Instead of waiting for equipment to fail, they can act early, reducing emergency repairs and extending the life of their assets. Additionally, AvalonBay’s centralized service center has contributed to payroll cost reductions, with expenses expected to grow at approximately 1% for 2024, well below the average merit increase of 4%, according to recent data.
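The core idea behind predictive maintenance can be illustrated with a minimal sketch: flag sensor readings that deviate sharply from a recent baseline so technicians can inspect equipment before it fails. The thresholds, window size, and readings below are hypothetical illustrations, not drawn from any vendor's system.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=24, z_threshold=3.0):
    """Flag readings that deviate sharply from the trailing window."""
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > z_threshold * sigma:
            flags.append(i)  # candidate for proactive inspection
    return flags

# Hypothetical hourly vibration readings from an HVAC compressor
normal = [1.0, 1.1, 0.9, 1.0] * 6          # 24 hours of steady readings
spiked = normal + [1.0, 1.05, 4.8, 1.0]    # one abnormal spike
print(flag_anomalies(spiked))               # flags the 4.8 reading
```

Production systems are far more sophisticated (they model seasonality, multiple sensors, and failure modes), but the principle is the same: detect drift early and act before an emergency repair is needed.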
AI is also playing a growing role in rent pricing. With access to real-time data, property teams can adjust pricing more dynamically in response to market trends. It’s not as fluid as hotel rates, but it allows for quicker and more accurate adjustments than traditional manual processes.
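As a toy illustration of the rule-based logic behind dynamic rent adjustments, a system might lean toward the market median, nudge the price by occupancy pressure, and cap the move per pricing cycle. The weights, caps, and dollar figures here are hypothetical assumptions, not any operator's actual model.

```python
def suggest_rent(current_rent, occupancy, market_median, max_move=0.05):
    """Suggest a bounded rent adjustment from occupancy and comps.

    Hypothetical rule of thumb: move halfway toward the market median,
    add upward pressure above 95% occupancy, cap at +/- max_move.
    """
    market_gap = (market_median - current_rent) / current_rent
    occupancy_pressure = (occupancy - 0.95) * 0.5
    proposed = market_gap * 0.5 + occupancy_pressure
    bounded = max(-max_move, min(max_move, proposed))
    return round(current_rent * (1 + bounded), 2)

# A unit listed at $2,000 in a $2,100 submarket, property 97% occupied
print(suggest_rent(2000, 0.97, 2100))  # -> 2070.0
```

The cap is the important design choice: unlike hotel rates, which can swing nightly, multifamily pricing moves in smaller, bounded steps.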
Altogether, this adds up to time saved. AI takes care of many repetitive and time-sensitive tasks, giving onsite teams more space to focus on what only people can do—like building stronger relationships with residents and ultimately creating a great living experience.

Sanil Paul
Sanil Paul: On the guest-facing side, leading hotel brands have adopted AI-enabled chatbots and virtual assistants that handle routine inquiries and are increasingly capable of offering highly personalized responses. This has helped increase hotel staff productivity across the industry. From an operational standpoint, AI is commonly used in revenue management and energy optimization. Existing AI algorithms allow for real-time dynamic pricing and more accurate demand forecasting. AI systems analyze vast amounts of data and patterns, enabling hotels to make proactive rate adjustments that anticipate market trends rather than simply reacting once those trends have materialized. This is critical for ensuring the hotel charges optimal room rates that maximize revenue potential in a highly competitive industry.
Also, many of our industry peers use AI-driven energy management systems, which adjust lighting and air conditioning based on occupancy, weather, and usage patterns. This generates substantial savings on energy costs while also satisfying industry-wide ESG goals.
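A simplified sketch of the occupancy-aware logic behind systems like these: relax temperature setpoints in vacant rooms and cool less aggressively on mild days. The specific temperatures and rules below are illustrative assumptions, not any particular energy management product.

```python
def hvac_setpoint(occupied, outdoor_temp_f, base_setpoint=72):
    """Pick a cooling setpoint (degrees F) from occupancy and weather.

    Hypothetical rules: let vacant rooms drift warmer, and ease off
    cooling when the outdoor temperature is mild.
    """
    if not occupied:
        return base_setpoint + 6   # vacant: save energy
    if outdoor_temp_f < 65:
        return base_setpoint + 2   # mild day: less cooling needed
    return base_setpoint

print(hvac_setpoint(occupied=False, outdoor_temp_f=90))  # -> 78
print(hvac_setpoint(occupied=True, outdoor_temp_f=60))   # -> 74
```

Real systems layer in weather forecasts, historical usage patterns, and per-zone sensors, but even simple rules like these capture why the savings are substantial across hundreds of rooms.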
What are the biggest risks AI poses to property managers, tenants, and guests in these sectors?
FNU Marsella: One of the biggest concerns in multifamily is fairness in how AI makes decisions, especially when it comes to screening and pricing. Many of these tools are trained on historical data that can reflect past inequalities. For example, screening algorithms might flag applicants based on ZIP codes or income levels that correlate with systemic disparities, even if the individual applicant is fully qualified. If not regularly audited, AI can reinforce patterns that disproportionately affect certain groups, potentially leading to serious compliance and reputational issues.
Data privacy is also a significant concern. In multifamily, AI systems are often working with deeply personal information—everything from credit history and employment data to behavioral insights from smart home tech. Because residents live in these spaces for longer periods of time, these systems accumulate a detailed, ongoing profile of how people live and interact with their homes. If that information is ever misused or breached, the consequences can be far-reaching, both legally and in terms of trust. While the type of data collected in hospitality may be more granular at a moment in time, multifamily systems hold a deep and ongoing record of residents’ lives. That makes proper data governance, security practices, and transparency about how data is used essential. Compared to hospitality, where AI missteps can lead to immediate brand damage or guest churn, the stakes in multifamily are different: bias can lead to legal exposure under fair housing laws and long-term harm to a property’s reputation.
Sanil Paul: For hotel asset managers and their guests, the primary risks center on guest satisfaction and cybersecurity. Unlike other real estate sectors, hospitality still thrives on direct human interaction and is highly experience-driven. While AI-powered virtual assistants are increasingly sounding human-like, over-reliance on them can lead to customer dissatisfaction, as most guests still prefer person-to-person interaction over a virtual assistant. This can significantly impact brand reputation and loyalty.
The immense volume of guest and operational data processed by AI-based systems presents significant cybersecurity risks. A data breach could cause irreparable damage to brand trust, and hotels could incur substantial privacy claims.
Even though the overall risks of AI implementation in hospitality may be similar to those in other sectors, they can differ significantly in intensity and specific outcome. The sector’s expectation of highly personalized service means that any overuse of AI can severely compromise the guest experience and the core hospitality product. Hotel AI systems also have the capacity to collect granular personal data on guests’ routines, spending patterns, and conduct, making a potential breach far more invasive and damaging.
What can property managers do to use AI responsibly and safely?
Sanil Paul: AI systems are still evolving and prone to errors, so robust human oversight is essential. Hotel managers should clearly define guidelines and acceptable use cases for each AI system, ensuring strict alignment with brand values, statutory laws, and guest expectations. Managers must train their staff to leverage AI as a productivity-enhancing tool while applying sufficient human judgment wherever necessary. Over-reliance on AI for guest interactions could quickly erode brand reputation and loyalty.
Regular, independent audits of AI algorithms and their outputs are also crucial. This helps detect and mitigate biases or errors before they lead to flawed decision-making and costly lawsuits.
Managers should prioritize implementing robust cybersecurity measures. This includes thoroughly vetting third-party AI vendors regarding data access and sharing and entering into data privacy and liability agreements with these vendors. As the full implications of AI are still unclear, managers should ensure that the vendors have sufficient liability insurance coverage.
Managers must also continuously invest in advanced cybersecurity software and regularly organize professional training for their staff on AI usage and data protection best practices. This is essential to prevent breaches and safeguard sensitive information.
FNU Marsella: Using AI responsibly starts with having a clear purpose. It should be aimed at solving real problems and supporting the team, not replacing the human element. It’s also important to be transparent about how AI is being used and what kind of data it relies on. If AI is making decisions that affect residents—like pricing, screening, or maintenance prioritization—there needs to be a human in the loop to evaluate and override when needed.
Similar to hospitality, regular audits are key, especially when it comes to fairness in screening or pricing decisions. That means going beyond the output and reviewing how the algorithms work, what data they’re trained on, and how that data is evolving over time. Protecting resident data is also critical. That includes having strong cybersecurity protocols, limiting access to sensitive information, and being clear with residents about what data is being collected and why. I believe AI should support property teams in making better decisions, not replace their judgment. When that balance is in place, it leads to more efficient operations and deeper trust with residents.
Are there any regulations or guidelines to govern AI use in these sectors? Is anything on the horizon?
FNU Marsella: Right now, most of the regulation around AI in multifamily [and hospitality] falls under broader data privacy laws. In the European Union (EU), the General Data Protection Regulation (GDPR) gives people the right to know what personal data is being collected and to request that it be deleted. In the U.S., the California Consumer Privacy Act (CCPA) offers similar protections, such as allowing consumers to opt out of the sale of their data and request its removal. These frameworks set the standard for how personal information must be handled, and multifamily operators using AI systems must ensure full compliance, especially when dealing with resident data.
Beyond privacy, regulators are beginning to look more closely at how AI is used in housing decisions. Agencies like the U.S. Department of Housing and Urban Development (HUD) and the Federal Trade Commission (FTC) have raised concerns about fairness in screening and pricing, and some states are considering new legislation that would require companies to disclose when AI is used and ensure that a human review process is in place. I believe the direction is clear: AI accountability is on the horizon. Operators that build strong internal practices now around transparency, oversight, and fairness will be in a much better position as more formal rules take shape.
Sanil Paul: Along with the data privacy laws, anti-discrimination laws become highly relevant when AI is used in dynamic pricing due to the inherent potential for algorithmic bias. For instance, if an algorithm generates different pricing based on geography or race, these anti-discrimination laws could apply. Even the FTC can intervene if AI systems are found to mislead consumers or engage in unfair practices.
As AI represents a nascent but rapidly evolving sector, most AI-specific regulations are still in their early stages of development. California and New York are proposing legislation that would require establishments to disclose when AI is used and provide consumers the option to opt out of automated decisions generated by AI. Also, the EU AI Act, which was formally adopted in 2024 and is being implemented in phases, is one of the most stringent regulations that governs the usage of AI. This act mandates rigorous conformity assessments, human oversight, and strict data governance. The U.S. National Institute of Standards and Technology (NIST) has also developed an AI Risk Management Framework, a guidance document designed to help companies navigate AI adoption. It could shape future AI regulations in the U.S.
Issue: September/October 2025, Volume 90, Issue 5