Safety in the AI era

Reducing risk and creating secure workplace cultures 

By Journal of Property Management

From chatbots to lease screening to maintenance tools, artificial intelligence (AI) is rapidly gaining a foothold in property management.

According to market researcher Gitnux:

  • 45% of property management firms have adopted AI tools to improve tenant screening processes.
  • 55% of property management firms use AI for predictive maintenance.
  • 65% of property management firms have implemented AI for fraud detection.
  • Looking ahead, 72% of property managers view AI as essential for gaining a competitive advantage.

For many, adoption has been swift, but as usage increases, so do risks. Any new technology brings caution and complexity, but AI poses distinct threats, with red flags raised around data bias, privacy, cybersecurity, and overreliance. Despite these concerns, experts say internal education, governance, and ongoing human oversight will go a long way in protecting property managers, owners, and tenants.

Risky AI business

To understand the risks of AI, it’s important first to understand what AI is—and what it isn’t.

Anne Hollander, founder of The Strategic Edge, an advisory firm specializing in AI strategy, says it’s crucial to distinguish between traditional software (like Excel and QuickBooks) and AI.

Traditional software follows written instructions, so it behaves the same way every time, making it predictable. “We understand how it works, we can see the code, and we can trace errors,” says Hollander. “AI is different, and we don’t fully know how it works. It’s driven by data, fancy math, and probability. It can give a different response tomorrow than it does today, and it doesn’t leave a clear audit trail. AI mimics or parrots back data it’s been given, so if that data isn’t accurate or is too limited, it can create problems. Garbage in, garbage out.”
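
Hollander's contrast between deterministic software and probabilistic AI can be made concrete with a toy sketch. The snippet below, with invented tokens and probabilities, shows the basic move behind generative text: sampling from a probability distribution, which is why the same input can yield a different output tomorrow.

```python
# Toy illustration of probabilistic output (tokens and probabilities are
# invented): generative AI samples from a distribution over possible next
# tokens, so identical inputs can produce different responses run to run.
import random

next_token_probs = {"approved": 0.55, "pending": 0.30, "denied": 0.15}

def sample_response():
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*next_token_probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Five calls with the exact same "input" can disagree with one another.
print([sample_response() for _ in range(5)])
```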

Bias

One of the most significant risks when feeding flawed data into AI is the potential for bias. Tracey Hawkins, a safety instructor and generative AI cybersecurity expert, breaks this bias into two key areas: 

  • Safety bias 
  • Bias that can lead to Fair Housing violations

“As for safety bias, most AI tools have been trained on white male American English voices,” Hawkins says. That means AI voice attendants may struggle to understand women, people with accents, or older adults, which could cause problems during routine service or in emergencies.

Bias also emerges in data-driven tools, such as lease screening systems. If the AI was trained on limited or skewed datasets—such as those from high-income or homogeneous communities—it may replicate those patterns in ways that create Fair Housing compliance risks. 

“A good example is screening applicants,” says Christa Leary, principal and co-founder of Maitri Partners Consulting. “If you feed AI a data set from a student property, it’s going to assume most applicants are young and have limited credit. It learns from patterns in that environment. In this example, it may assume that all applicants need cosigners. But if you apply that same model to a senior community or urban property, the applicant profiles are completely different.” The model’s assumptions won’t translate—and that’s when bad decisions and legal exposure can occur.
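
Leary's scenario can be sketched with a small synthetic example. Everything below is hypothetical (invented features, thresholds, and data), but it shows how a model trained only on student-property applicants can end up flagging nearly every applicant in a senior community as needing a cosigner:

```python
# Hypothetical sketch of dataset shift in lease screening. Features,
# thresholds, and data are all invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000

def qualifies_alone(income_flag, credit_years):
    """Toy ground-truth policy: 3+ years of credit OR full-time income."""
    return ((credit_years >= 3) | (income_flag == 1)).astype(int)

# Student property: credit files are uniformly thin, so full-time income is
# the only signal the model can learn from.
student_income = rng.integers(0, 2, n)
student_credit = rng.uniform(0, 2, n)  # almost no one has 3+ years of credit
X_student = np.column_stack([student_income, student_credit])
y_student = qualifies_alone(student_income, student_credit)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_student, y_student)

# Senior community: everyone is retired (no income flag) but has a long
# credit history, so under the true policy they all qualify on their own.
senior_income = np.zeros(n, dtype=int)
senior_credit = rng.uniform(15, 40, n)
X_senior = np.column_stack([senior_income, senior_credit])
y_senior = qualifies_alone(senior_income, senior_credit)

print("accuracy on student data:", accuracy_score(y_student, model.predict(X_student)))
print("accuracy on senior data: ", accuracy_score(y_senior, model.predict(X_senior)))
# The model learned "no full-time income means a cosigner is needed," so it
# wrongly flags virtually every senior applicant despite strong credit.
```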

Data security

Another critical area of risk is data privacy. Hawkins warns that many people don’t realize how exposed their data becomes once it’s entered into AI systems.

“If you’re including client data—any personal, financial, or proprietary information—it can become public,” she says. “There’s no such thing as privacy. Redact anything sensitive before it ever touches an AI tool.”
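
As a rough illustration of that advice, a pre-processing step can strip the most obvious patterns before any text is pasted into an AI tool. The sketch below uses a few illustrative regular expressions; a real deployment should rely on vetted PII-detection tooling, since simple patterns like these miss names and anything irregularly formatted.

```python
# Minimal sketch of redacting obvious PII before text reaches an AI tool.
# These regexes only catch common, well-formed patterns; they are a floor,
# not a substitute for dedicated PII-detection software.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Tenant reported a leak; reach her at jane.doe@example.com or 555-867-5309."
print(redact(note))
# Tenant reported a leak; reach her at [EMAIL REDACTED] or [PHONE REDACTED].
```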

Leary emphasizes that many companies lack clear, proactive internal privacy policies: guidelines around what data is collected, where it is stored, who has access to it, and what happens in the event of a breach.

“These are the basics, but in this fast-moving space, it still feels like the Wild West,” Leary says. “There are so many tools popping up, and not all of them were built with real estate-specific risks in mind.”

Brain drain

As AI tools become more capable, the risk of overreliance is growing—a concern Hollander refers to as “brain drain.”

“You start by not knowing what to do with AI, then it becomes part of your daily routine—until one day you realize you haven’t written your own email in weeks or even months,” she says. “That’s when we know we’ve reached a point of overreliance and need to back off a bit.” 

That overuse can lead to the erosion of core skills, such as communication and judgment. Findings from a recent MIT study back this up. In the study, researchers broke participants into three groups and asked them to write SAT essays using ChatGPT, Google search, or nothing at all. Using EEG to monitor brain activity, the researchers saw that the ChatGPT users had the weakest brain activity, observing that they “consistently underperformed at neural, linguistic, and behavioral levels.”

The antidote to this “brain drain,” according to Hollander, is a balanced education strategy: Train people to use AI effectively, but also guide them on when not to use it.

“Brain drain happens when people start losing critical thinking and soft skills,” she explains. “That’s why education and enablement need to be two-sided: Use AI to its maximum potential, but build human skills alongside it. That’s our differentiator in the AI space—humans know people.”

A look back at using AI

One year ago, JPM published “An experiment with generative AI,” looking at how accurate and useful tools like ChatGPT and Copilot were for property management purposes. The conclusion: “Only human input can identify the specific issues and details necessary for real insight into a topic. This is especially true for a complex industry that constantly changes, like property management.”

Vendor considerations

To mitigate AI risks, one of the simplest and most effective steps a property manager can take is to thoroughly vet their AI vendors. Asking the right questions can reveal red flags or provide reassurance that a tool is safe and compliant.

Whether working with your current vendor or vetting a new one, there are helpful questions to ask, including: 

  • Do they test for and mitigate bias?
  • Can they provide documentation of their training data?
  • Is there human review built into AI decision-making?
  • Who owns the output?
  • How does the tool protect sensitive data?
  • Is there an ethics policy or governance framework?

“We want to understand how the model is built, and when we don’t get answers or encounter defensiveness from vendors, that’s a red flag,” Hollander says. “We’re not looking only for a tool; we want a partner in this.” 

She adds that it’s a green flag if a vendor has a publicly available governance program, so you can review how they think about AI policy and governance within their own build, development, and monitoring cycles. “Even better if they can share their audit logs or the monitoring process they go through on a regular basis,” she says.

Building a safe AI environment

Whether an organization is going all-in on AI or still testing the waters, Hollander advises a thoughtful approach. She encourages organizations to take time to understand the risks, move slowly, and invest in strong education and guidance around AI use.

Since many organizations already use AI tools, Hawkins recommends starting with a risk assessment.

“Find out who’s using AI, what tools they’re using, and how,” she says. “Are they using passwords instead of passphrases or passkeys? Have they set up two-factor authentication? Is everyone installing security updates regularly? With AI, all of that becomes even more important.”
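
One lightweight way to begin that assessment is a simple inventory recording each tool, who uses it, and whether basic safeguards are in place. The sketch below is hypothetical; the field names and flagging rules are illustrative, not a standard.

```python
# Hypothetical AI-tool inventory with a pass over basic safeguards.
# Field names and rules are invented for illustration.
AI_TOOL_INVENTORY = [
    {"tool": "ChatGPT", "team": "leasing", "handles_tenant_data": True,
     "two_factor_auth": False, "passphrase_or_passkey": True},
    {"tool": "Copilot", "team": "maintenance", "handles_tenant_data": False,
     "two_factor_auth": True, "passphrase_or_passkey": False},
]

def flag_risks(inventory):
    """Return human-readable findings for tools missing basic safeguards."""
    findings = []
    for entry in inventory:
        who = f"{entry['tool']} ({entry['team']})"
        if not entry["two_factor_auth"]:
            findings.append(f"{who}: enable two-factor authentication")
        if not entry["passphrase_or_passkey"]:
            findings.append(f"{who}: move from passwords to passphrases or passkeys")
        if entry["handles_tenant_data"]:
            findings.append(f"{who}: confirm tenant data is redacted before use")
    return findings

for finding in flag_risks(AI_TOOL_INVENTORY):
    print("-", finding)
```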

In her experience working with organizations, these assessments often open the door to stronger training, internal policies, and oversight.

“Establish safety and governance,” Hawkins says. “Make sure your team knows when and how to use these tools, and what data is off-limits. If not, you’re inviting risk.”

To navigate the changes AI introduces, Leary recommends hiring or designating a certified change management practitioner—someone who can lead internal adaptation with clarity and neutrality.

“This is a massive shift,” she says. “You need someone internally who’s unbiased, certified, and has the horsepower to guide those changes, because roles will shift, job descriptions will change, and human resources and operations will have to adapt. This isn’t just a software rollout—it’s ongoing, and it touches every part of the business.”

Leary notes that internal training has become a necessity that many companies can no longer afford to skip, and the education needs to reach every person in the company. “You have people at very different knowledge levels when it comes to AI,” she says. “But this isn’t a one-and-done. AI training must be ongoing, and it has to stay up-to-date.”

Some companies are even forming AI steering committees before launching any new tools, and in some cases creating their own AI agents to keep data in-house. Leary says that by controlling and owning their data, companies can minimize data-access risk, enforce Fair Housing rules, and avoid vendor breaches, among other benefits.

A future of safety

Hawkins says she hopes all of the education instills a sense of preparedness, not fear. “The companies that will thrive are the ones putting safety and governance in place before something goes wrong,” she says. “That means training your team, understanding your tools, and treating AI with the same diligence you’d expect from any risk-sensitive area of your business.”

While AI will become more embedded in daily operations, human oversight must remain central. As Hollander, Hawkins, and Leary all emphasize, AI can augment—but not replace—the critical thinking, ethical judgment, and people skills that only humans can bring.

“Remember the world before the internet in the office? Or life before smartphones? AI will become just as essential,” Hollander says. “We’ve hit the limit of what traditional productivity tools can do. AI is that next leap forward—and the good news for humans is, we get to be human again. It can take some of the burden off our plates, freeing us to focus on the work that matters.”
