High Tech Housing Discrimination

The landmark Fair Housing Act of 1968 sought to ban housing discrimination, but a modern threat has emerged: the rapid adoption of new technologies for selling and renting homes. Despite decades of progress, there is still much work to be done. As the National Fair Housing Alliance (NFHA) noted in its 2019 Fair Housing Trends Report, new ways of advertising homes and apartments, including AI-driven tools and demographic microtargeting that zeroes in on a particular audience, threaten to continue the discrimination of the past by modern means.

The homeownership rate for black Americans stood at 42.3 percent last year, only marginally better than the 41.6 percent recorded in 1970. Clearly there is a problem in the system. A report released last month by the NFHA found that housing discrimination cases were on the rise across the nation. Algorithms are not impartial, unbiased systems that fairly sort through data; rather, they tend to manifest the biases of their creators and of society at large.

When screening tenant applications, for instance, an automated system may reject applicants based on unintended connections between data sets: living in a low-income neighborhood may be treated as a proxy for an inability to pay rent. And because modern algorithms compile and sort through myriad data sets, it can be hard for designers and programmers to understand exactly which data point caused the system to reject an applicant. Research released last month by a team at Berkeley found that lenders using algorithms to make loan-pricing decisions have discriminated against borrowers of color, resulting in a collective overcharge of $765 million each year for home and refinance loans. The same analysis of roughly 7 million 30-year mortgages found that both in-person and online lenders rejected a total of 1.3 million creditworthy applicants of color between 2008 and 2015.
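To make that mechanism concrete, here is a minimal, hypothetical sketch in Python, using entirely synthetic numbers, of how a screening rule that never sees race can still produce skewed outcomes when a neighborhood-level feature acts as a proxy:

```python
# Hypothetical illustration: a toy screening rule that never sees race,
# but uses neighborhood median income as a feature. Because neighborhood
# correlates with group membership in the synthetic data below (a stand-in
# for historical segregation), rejection rates end up differing by group,
# even though individual ability to pay is identical across groups.
import random

random.seed(0)

def make_applicant(group):
    # Synthetic assumption: group B applicants are more likely to live in
    # lower-income ZIP codes; ability to pay is drawn the same way for both.
    zip_income = random.gauss(38_000 if group == "B" else 52_000, 8_000)
    ability_to_pay = random.gauss(1.0, 0.2)
    return {"group": group, "zip_income": zip_income, "ability": ability_to_pay}

def screen(applicant):
    # The rule only looks at the neighborhood feature, not the person.
    return "approve" if applicant["zip_income"] > 45_000 else "reject"

applicants = [make_applicant(g) for g in ("A", "B") for _ in range(10_000)]

for group in ("A", "B"):
    pool = [a for a in applicants if a["group"] == group]
    rejected = sum(screen(a) == "reject" for a in pool)
    print(f"group {group}: rejection rate {rejected / len(pool):.1%}")
```

In this made-up example both groups have the same ability to pay, yet group B is rejected far more often because the rule keys on where applicants live, which is exactly the kind of unintended connection that is hard to trace after the fact.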

Employing new methods like machine learning and artificial intelligence can make processes such as sorting through tenant applications faster, more efficient, and cheaper. The problem is that when you try to build an automated system that solves social problems, you end up creating something that looks at the data of the past and learns the sins of the past.

Targeting some, excluding others

One of the more high-profile examples of technology creating new forms of housing discrimination arose from online advertising. Facebook has been cited numerous times by the ACLU and other advocacy groups for its microtargeting feature, which lets advertisers send ads to specific groups via a drop-down menu of categories, including age, race, marital status, and disability status. Real estate professionals could purchase and publish ads on Facebook that excluded racial groups and other classes protected by the Fair Housing Act. Facebook has since apologized and restricted targeting capabilities for housing ads. Earlier this month, as part of a settlement with the ACLU and other groups that had filed suit, Facebook said that housing, employment, and credit ads can no longer be targeted based on age, gender, ZIP code, or multicultural affinity. The social network will also maintain a searchable ad library so that civil rights groups and journalists can keep tabs on future housing advertisements.

Other tech giants, including Google and Twitter, have been investigated by the Department of Housing and Urban Development (HUD) for similar issues. The nature of these social network ads can also lead to unintentional targeting. Many of these systems allow for lookalike audience targeting, a feature that can, for example, help a clothing company reach consumers similar to those who already like or follow a brand. Carry that over to the housing world, and it could help a high-end apartment developer target potential renters who are similar to existing tenants, in effect concentrating on the same kinds of renters who already live in the building and potentially excluding others.
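As a rough illustration of the general idea (not any platform's actual implementation), a lookalike system can be thought of as ranking prospects by their similarity to a seed audience; the names and feature values below are made up:

```python
# Hypothetical sketch of lookalike-audience expansion: score prospects by
# similarity to a "seed" audience of existing tenants and keep the closest
# matches. If the seed audience is homogeneous, the expanded audience
# tends to be homogeneous too.
from math import dist

# Toy feature vectors: (income in $10k, age in decades) -- illustrative only.
seed_audience = [(12.0, 3.1), (11.5, 2.9), (13.0, 3.4)]   # current tenants
prospects = {
    "p1": (12.2, 3.0),   # looks like the seed audience
    "p2": (4.5, 5.8),    # does not, so it gets filtered out
    "p3": (11.8, 3.3),
}

def similarity(candidate, seed):
    # Higher is more similar: inverse of mean distance to the seed members.
    return 1.0 / (1.0 + sum(dist(candidate, s) for s in seed) / len(seed))

ranked = sorted(prospects.items(),
                key=lambda kv: similarity(kv[1], seed_audience),
                reverse=True)
lookalike_audience = [name for name, _ in ranked[:2]]
print(lookalike_audience)  # ['p1', 'p3'] -- the prospects most like existing tenants
```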

Making Changes

Many advocates believe the answer to this unconscious bias is to change the way these new systems are designed in the first place. One step toward changing how these algorithms work could be changing who designs them. Advocates within fair housing and technology need to educate programmers and others about how bias manifests itself in these systems, while also designing technology that includes discriminatory flags, or bias signals: built-in checks that evaluate how a system is performing and whether it may be creating biased outcomes, as in the sketch below.
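One form such a built-in check could take, sketched here as an assumption rather than any specific product's feature, is a disparate-impact test modeled on the "four-fifths rule": compare approval rates across groups and flag any group whose rate falls well below the highest one.

```python
# A minimal, hypothetical "bias signal": compare approval rates across
# groups and raise a flag when any group's rate falls below 80% of the
# highest group's rate. The threshold and group labels are illustrative
# assumptions, not a legal standard.
from collections import defaultdict

def disparate_impact_check(decisions, threshold=0.8):
    """decisions: iterable of (group, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

# Toy outcomes from an imaginary screening system:
outcomes = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 55 + [("B", False)] * 45

rates, flagged = disparate_impact_check(outcomes)
print(rates)    # {'A': 0.8, 'B': 0.55}
print(flagged)  # {'B': 0.55} -- below 80% of group A's rate, so flag for review
```

A check like this does not explain why the gap exists, but it gives designers and auditors an early signal that a system's outcomes deserve human review.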

Larger legal remedies may also be afoot. The House Financial Services Committee has been looking into the issue and held a hearing in July, and some advocates have raised the idea of revamping the Communications Decency Act, which governs the behavior of tech firms and social networks, to create more specific rules around this type of bias and discrimination.

A big part of the solution should be keeping humans within the system. Housing is so foundational to achievement, household wealth, and equality that some decisions shouldn't be left to machines. The idea that math is better than humans may be true in some instances, but not all. There's a difference between mathematical fairness and social fairness. We should design for justice instead.

