An introduction to AI
It may be surprising to learn that the modern era of AI began in the 1950s with Alan Turing.
AI is a broad term covering machines, robots and software systems that perform complex tasks independently, without being given instructions for each individual step. There are different kinds of AI including, but not limited to, Machine Learning, Deep Learning and Generative AI (see An Introduction to AI).
The AI that has most recently entered the public discourse is predominantly Generative AI (GenAI). GenAI refers to AI-powered tools that take user inputs (usually text) and output some form of media (such as text, image or audio). There is a wide range of tools, each taking different inputs and producing different outputs. OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude are examples of text-to-text AI tools (meaning they take text as input and generate new text as output). Text-to-image generators also exist, such as Stable Diffusion and Adobe’s Firefly.
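To make the text-in, text-out relationship concrete, here is a minimal sketch of a text-to-text call using OpenAI’s Python client. The model name and prompt are illustrative assumptions, not recommendations, and an API key is required.

```python
# Minimal text-to-text sketch using OpenAI's Python client (openai>=1.0).
# The prompt and model name are illustrative; the OPENAI_API_KEY environment
# variable must be set for this to run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat-capable model would do
    messages=[
        {"role": "user", "content": "Draft a short thank-you email to a volunteer."}
    ],
)

print(response.choices[0].message.content)  # the generated text
```

Text-to-image and text-to-audio tools work the same way in principle: a prompt goes in, generated media comes out.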
We are exposed to AI all the time, often without realising it. Type a sentence into Word and it suggests the next word; ask Google a question and the top result may be an AI-generated overview. These examples of GenAI are becoming part of everyday life, although for now they remain largely technologies we can opt in to or out of.
Use of GenAI in the homelessness sector
Some homelessness organisations have started to embrace the opportunities of GenAI tools. The London-based charity Providence Row encourages staff to use ChatGPT for general tasks: writing emails, planning projects, and developing job descriptions and interview questions. They describe using this technology as being like having an assistant. Neal McArdle, Head of Learning and Training Services at Providence Row, thinks AI could have a huge impact on the workforce, reducing time spent on admin and in turn helping to reduce rates of burnout and turnover.
The homelessness tech organisation Beam has developed a product called Magic Notes: social workers record their sessions, receive an automatically generated note by email, and can then upload it to their case management system.
“I have completed my write up and assessment form - it was brilliant and accurate and really sped me up” (Social Worker)
Using AI to inform complex homelessness systems
A Churchill Fellowship report by Dr Ritvij Singh entitled ‘Artificial Intelligence: applications in homelessness’ argues that there are three main ways in which AI can be used in the homelessness sector:
- Early prediction: identifying risk factors that may lead to homelessness and how long the situation may last (a toy sketch of this idea follows the list)
- Outreach and resource allocation: coordinating the support people experiencing homelessness receive from multiple channels
- Personalised decision-making: helping social workers and other actors match services with a person’s specific needs
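To illustrate what early prediction might look like in code, here is a toy sketch using logistic regression over invented risk factors. Every feature name, value and outcome below is hypothetical; a real system would need representative, carefully governed data and ethical oversight.

```python
# Toy "early prediction" sketch: logistic regression over hypothetical
# risk factors. All feature names and numbers below are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [months of rent arrears, prior evictions, weeks without income]
X = [
    [0, 0, 0],
    [1, 0, 2],
    [3, 1, 8],
    [5, 2, 12],
    [0, 0, 1],
    [4, 1, 10],
]
y = [0, 0, 1, 1, 0, 1]  # 1 = household later experienced homelessness

model = LogisticRegression().fit(X, y)

# Estimated probability of homelessness for a new, hypothetical household
print(model.predict_proba([[2, 1, 6]])[0][1])
```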
Uses of AI in housing services
- Planning: optimising resource planning, and forecasting risks and their implications
- Updates: regular, automated updates and responses to maintenance requests
- Maintenance: extracting and summarising maintenance requests at the front door
- Preventative action: preventative maintenance of properties, including boilers
- Forecasting: mould and damp detection and predictive forecasting tools
- Monitoring faults: smart sensors and IoT devices, utilised for monitoring faults
Solutions
These six uses are based on extensive research into key problems in housing services and where AI may provide solutions. Applications of this kind generally go beyond a GenAI tool such as ChatGPT and would require specific implementation software, e.g. Mobysoft’s RepairSense.
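As a rough illustration of the forecasting and fault-monitoring uses above, here is a minimal sketch that flags damp and mould risk from humidity sensor readings. The room names, readings and 70% threshold are all invented; commercial tools such as RepairSense use far more sophisticated models.

```python
# Minimal sketch: flag rooms at risk of damp/mould from humidity sensors.
# All readings and the 70% threshold are illustrative, not guidance.
DAMP_RISK_THRESHOLD = 70.0  # % relative humidity (hypothetical cut-off)

readings = {  # hypothetical daily averages per room
    "flat_12_bathroom": 82.5,
    "flat_12_bedroom": 55.1,
    "flat_30_kitchen": 74.8,
}

for room, humidity in readings.items():
    if humidity > DAMP_RISK_THRESHOLD:
        print(f"ALERT: {room} at {humidity}% RH - inspect for damp/mould")
```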
Risks and concerns
In a recent webinar, Dr Geoffrey Messier (University of Calgary) stated that using AI for the sake of it is a recipe for disaster. For AI to be effective it must be a good fit for an identified problem; used inappropriately, it can cause serious problems.
In Australia, a social worker used ChatGPT to write a child protection report, which ‘contained inaccurate information and downplayed the risks to the child.’ Users of GenAI should always check the accuracy of the information produced. It is also important to remember that information entered into ChatGPT may be retained by the provider and used to train future models, so highly confidential and sensitive information about children or vulnerable adults should not be shared.
GenAI outputs are only as good as the data sources they are trained on, and can sometimes deliver unreliable information that reinforces negative stereotypes or poor practice. In a blog from Connection at St Martin, examples were given of how AI described a ‘homeless person’:
“A homeless person looks dishevelled, with grimy clothes and unkempt hair. They move from place to place with all their possessions, often scavenging from bins. Their faces show a certain amount of sadness and loneliness with a broken spirit that tells a story of a difficult journey. There is often a sense of hopelessness about them, a feeling of being lost and out of place.”
This is a damaging and stigmatising description and shows the risk of data bias. There are of course many strengths-based depictions of homelessness on the internet, but it is not known which data sources the AI tool was trained on, how it was trained, or how it arrived at a particular output.
In 2023, the US publication Mad in America reported that AI-based programmes harm homeless populations. This included the Housing Allocation Algorithm (HAA), an AI-based decision support system, of which the following was written:
“Frontline workers voiced concerns about feeling powerless when algorithmic scores did not align with the actual needs of homeless individuals. This sense of powerlessness or frustration was exacerbated when supervisors and organizations discouraged workers from understanding or overriding the AI’s algorithmic score. It was suggested that final decisions and scoring should involve workers rather than relying solely on a potentially flawed AI system.”
And this gets to the heart of concerns around the use of AI in any kind of housing, health or social care setting. AI tools may indeed create useful solutions to some problems, but they will never be human. A person-centred and relational approach is crucial, and for that, people are needed.
Conclusions
As Dr Lígia Teixeira of the Centre for Homelessness Impact said of AI tools in her recent blog:
“They hold the potential to build more resilient housing systems for the future.”
However, caution is needed: AI can lead to poor decision-making, draw unhelpful conclusions, or even undermine professionals and person-centred approaches. AI has huge potential, but it should be approached with cautious curiosity, keeping in mind these six guiding principles:
- Do not use AI for the sake of it
- AI can only provide a solution to a clearly identified problem
- The quality of the data will dictate the quality of the output
- Not all AI is equal; find the right AI tool for the job
- AI cannot be done in isolation; bring everyone in the organisation along on the journey
- Humans will always be needed, no matter how good the AI tool is
Further references
For more information:
- AI policy template for charities
- AI guide
- AI and grant funding
- Good Things Foundation: Artificial Intelligence resources (National Digital Inclusion Network)