RICS Responsible Use of AI: Explained for APC Candidates


During your RICS final assessment interview it is now highly likely that you will be asked questions about the use of Artificial Intelligence and how, as a chartered surveyor, you will use AI responsibly. Following the introduction of the RICS Professional Standard: Responsible Use of Artificial Intelligence in Surveying Practice, as an APC candidate you will need to demonstrate your knowledge and understanding of this document. Because this is an RICS professional standard, compliance is mandatory, so it is important to have a solid understanding of it before your final assessment interview. This post is designed to support RICS APC candidates with their revision on this new professional standard, and a mock interview Q&A practice is included at the end to test your knowledge.

The RICS professional standard, Responsible Use of Artificial Intelligence in Surveying Practice, was first introduced in September 2025 and becomes effective on 9th March 2026. It applies whenever the use of Artificial Intelligence has a material impact on the delivery of surveying services, and compliance is mandatory for all RICS members and regulated firms.

Why was the professional standard, Responsible Use of Artificial Intelligence in Surveying Practice, introduced?

The RICS recognises that Artificial Intelligence is a powerful tool in surveying practice for purposes such as analysing data, identifying trends and supporting professional recommendations. However, alongside these opportunities come risks such as the loss of confidential client data, inaccurate recommendations and biased outputs resulting from poor data sets. These risks can harm clients and undermine public trust and confidence in the surveying profession. The RICS has introduced this standard to protect clients from these risks and to maintain high standards of service and strong ethical practice when its members and firms adopt AI that carries a material impact. This requires members and firms to maintain high levels of professional scepticism, professional skill and judgement, communication and transparency with clients, and strong data protection and risk management procedures. As an overarching principle, the standard sets increased requirements for human oversight and intervention when adopting AI, to safeguard against complacency and over-reliance on AI systems.

What are the different types of AI Systems?

Before attending your final assessment interview, you will be expected to have a basic understanding and awareness of the different types of AI systems and their limitations. The different types of AI systems can be briefly summarised into:

General AI – AI systems that can perform a wide range of intellectual tasks rather than being limited to one specific task. A general AI could write a cost plan or diagnose a building defect without specific training, learn from experience and transfer knowledge between domains.
Narrow AI – specialised AI tools that are good at performing one task, for example e-mail spam filters or voice assistants that play songs or set timers. They perform specific tasks but lack human-level reasoning, cannot be adapted to new tasks and are limited to the task for which they are intended.

What are the limitations of AI?

AI systems are typically trained on large data sets within which they identify patterns and correlations. They can produce predictions, recommendations and classifications and can generate content, but they do not understand context in the way that humans do. For example, if an AI tool suggests a reinstatement cost, it will do so using historical data, without an understanding of the specific site conditions, client objectives or current market factors. AI systems generate their outputs from patterns in their training data rather than from professional judgement, human-level reasoning and understanding.

In summary the limitations of AI include:

  • Dependence on the quality of training data.
  • Inability to deal with novel or unusual scenarios.
  • Lack of real-world understanding.
  • Difficulty interpreting nuanced professional judgement.

Surveying Examples:

  • An AI valuation tool trained on historic data may struggle in volatile or abnormal market conditions.
  • An AI document summary may miss critical caveats or assumptions.
  • An AI defect-detection tool may fail to identify issues outside of its training set.

AI Failure Modes & Erroneous Outputs

The RICS expects chartered surveyors and regulated firms to have an understanding and awareness of the different failure modes and erroneous outputs of AI systems. A failure mode is a particular way in which an AI system might fail to perform its function. Erroneous outputs from AI can often go undetected because AI systems don't typically produce obvious error messages; they can fail confidently and plausibly. Common AI failure modes include over-generalising recommendations and summaries from limited data sets, repeating errors at scale when automated, and producing biased results when trained on inaccurate or weighted data sets.

Surveying Examples:

  • AI systems inventing lease clauses that do not exist.
  • AI systems misclassifying a defect due to poor image quality.

Bias Risk Within AI Systems

The RICS expects chartered surveyors and regulated firms to have an understanding and awareness of the different bias risks within AI systems. AI systems can inherit bias from their training data, their design, their developers and how they are utilised. This bias can be statistical, social, geographic or economic and can lead to unfair outcomes, poor advice, reputational damage and legal implications.

Surveying Examples:

  • A valuation AI tool trained on prime urban assets may misprice rural or sub-prime assets.
  • A construction cost model trained on historic projects may underrepresent modern sustainability requirements.

Data Risks Within AI Systems

The RICS also expects chartered surveyors and regulated firms to have an understanding and awareness of the different data security risks within AI systems. This is regarded as one of the highest-risk areas of AI use, as AI systems typically require large amounts of data to be shared with and uploaded to third-party platforms. Processing these large data sets creates risks around confidentiality, data protection, data retention and unauthorised use of data.

Surveying Example:

  • Uploading information into AI systems without the consent of a client. This could include sensitive financial data, lease documents and personal information, and can breach data protection laws, client confidentiality and professional standards.

Maintaining Professional Scepticism

A key term of reference that candidates need to be aware of within the professional standard is professional scepticism. This is based on having a questioning mentality: surveyors are expected to critically assess evidence and remain alert to conditions that might make information misleading. As a chartered surveyor, you are expected to understand the risks, limitations and responsibilities associated with AI use, as the RICS expects its members to be informed, sceptical and responsible users of AI systems. Data must be checked and cleaned, outputs should be tested under multiple scenarios, and constant review and human professional judgement are needed.

Practice Management Requirements

The RICS is aware that successful candidates can walk out of their interview as chartered surveyors and set up a professional practice, so you need to demonstrate how a professional practice would use AI responsibly. The RICS Professional Standard: Responsible Use of AI in Surveying Practice uses the principles below to ensure regulated firms use AI transparently, mitigate risks and maintain human oversight. Again, this is essential knowledge for all APC candidates.

Essential Practice Management When Using AI:

  • Data Governance – Effective data governance requires RICS regulated firms to safeguard sensitive and confidential data. Regulated firms may adopt encrypted cloud storage, access logs and two-factor authentication to achieve this. Firms should keep in mind that traditional surveying software typically stores and processes data in ways that are relatively easy to control. AI systems, however, may learn from data, retain it and share it across multiple systems via third-party servers outside of your jurisdiction. Regulated firms may need to seek assurances from third-party platforms about how sensitive data is managed and seek consent for data to be shared. Data governance is therefore a greater risk when using AI, and firms must have strong procedures in place to prevent the loss or unpermitted sharing of sensitive data.
  • Anonymisation – To protect sensitive data sets, it is also advisable for regulated firms to anonymise data prior to use with AI. For example, when uploading a lease document or cost report, it may be advisable to redact names, remove addresses, strip financial identifiers and, where possible, ensure that data is fully anonymised before being used in AI systems to reduce confidentiality risk.
  • Annual Training – To achieve effective practice management, the professional standard also requires regulated firms to provide training, which can include guidance on which data can be uploaded, which data must not be uploaded, how AI tools retain or reuse data, and what to do if staff are unsure. For example, cost managers can be trained not to upload tender returns, contractor pricing documents or commercially sensitive benchmarks. A surveyor who uploads a sensitive tender return may encounter legal issues around a lack of consent, unknown data retention and unknown jurisdictions. They must be sure to obtain written client consent, assess AI provider risks, confirm data handling terms and record their decision.
  • System Governance – The professional standard also sets out requirements for effective system governance, which requires a justification process on whether the use of AI is suitable. This is needed because AI introduces speed, scale and automation, and with these comes the potential for amplified errors, reputational risk and reduced transparency. Key assessment criteria for RICS regulated firms to consider include the surveying services being delivered, the nature of the task, alternative tools, environmental impact, stakeholder impact, bias and the risk of errors.
  • Risk Management – Another key factor of effective practice management under the RICS Professional Standard: Responsible Use of Artificial Intelligence in Surveying Practice is maintaining a formal AI risk register. AI risks are not isolated, one-off issues. They are systemic, meaning one issue can affect multiple instructions; scalable, meaning errors can be replicated across projects quickly; and dynamic, meaning AI systems can evolve, update and retrain over time. Because of this, a structured and regularly reviewed risk register is essential. It centralises knowledge, supports compliance and demonstrates to assessors that you understand AI is a regulated professional risk, not simply a technical tool to be relied on at face value.
  • Responsible Use Policies – The professional standard also requires RICS regulated firms to adopt responsible use policies when using AI with a material impact. Responsible use policies aim to prevent unsupervised AI use, clarify accountability, reinforce human oversight, protect client confidentiality and support regulatory compliance. They ensure AI is used consistently, proportionately and transparently.
  • Human Oversight – Human oversight is arguably one of the most important requirements within the professional standard. The RICS makes it clear that AI must not replace professional skill and judgement. Instead, AI should enhance efficiency while ultimate responsibility remains with a competent surveyor. Oversight mechanisms may include detailed output reviews, spot checks, dip sampling, peer reviews, comparison against non-AI generated outputs and validation against market evidence.
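To make the anonymisation requirement above concrete, the sketch below shows a minimal, illustrative redaction pass in Python. The patterns and placeholder labels are assumptions for illustration only; a real workflow would need far more robust detection (names and addresses in particular cannot be reliably caught by simple patterns) and a manual review step before anything is uploaded to a third-party AI tool.

```python
import re

# Illustrative patterns only: UK-style postcodes, e-mail addresses and
# sterling amounts. A production anonymisation process would need broader
# detection and human sign-off before any upload to an AI platform.
REDACTION_PATTERNS = {
    "[REDACTED-POSTCODE]": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
    "[REDACTED-EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[REDACTED-AMOUNT]": re.compile(r"£\s?[\d,]+(?:\.\d{2})?"),
}

def redact(text: str) -> str:
    """Replace postcodes, e-mail addresses and sterling amounts with placeholders."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

clause = "Rent of £45,000.00 payable to J. Smith (j.smith@example.com), Unit 4, SW1A 1AA."
print(redact(clause))
# prints: Rent of [REDACTED-AMOUNT] payable to J. Smith ([REDACTED-EMAIL]), Unit 4, [REDACTED-POSTCODE].
```

Even with a script like this, the professional standard's emphasis on human oversight still applies: the redacted document should be checked by a surveyor before upload, since pattern matching will miss identifiers it has not been told about.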

Implementing the Standard

  • Procurement – When procuring third-party AI, practices should undertake due diligence on environmental impact, ensure compliance with legislation, and understand the risks within the AI, such as data bias and its limitations. Practices should also document who developed the AI, record any risks within the AI risk register, and understand the accuracy of the outputs and any legal and confidentiality requirements.
  • In Use – When using AI, maintaining professional scepticism is critical. Practices need to constantly ensure that AI outputs are reliable and accurate, and surveyors need to apply their professional judgement, skill and experience. Any concerns around the accuracy of outputs or the suitability of AI use should be documented, along with judgements on those concerns and whether the outputs are fit for purpose. If outputs are deemed not fit for purpose, this must be communicated to the client to ensure transparency. Regular sample checks should be undertaken to ensure quality.
  • Transparency with clients – When using AI with a material impact, clients must be notified in advance. This must be clearly set out within the terms of engagement along with the client's opt-out rights, information about the AI being used, what it will be used for, how risks are mitigated and the processes for keeping the client informed.
  • Explainability – AI risk registers, assessments of suitability, responsible use policies and human oversight must be documented and maintained to create an audit trail showing that AI is being used responsibly. Working limitations and suitability decisions must all be recorded, as clients have a right to understand how the advice they are receiving has been generated, which maintains confidence.

Conclusion & Summary

Artificial Intelligence is now a regulated professional risk. AI outputs can be biased, incomplete or unreliable and AI cannot replace the professional skill and judgement of qualified and competent surveyors. Because of this, surveyors need to remain fully accountable for the advice they have provided.

Responsible use policies, risk registers and human oversight mechanisms exist to:

  • Protect clients.
  • Maintain public trust in the profession.
  • Safeguard professional standards.
  • Ensure compliance with the RICS Professional Standard: Responsible Use of Artificial Intelligence in Surveying Practice.

Essential APC Checklist

Before your final assessment, ensure you understand:

  • AI Risk Register requirements.
  • Assessment of suitability before use.
  • Responsible use policies.
  • Data governance and anonymisation.
  • Transparency obligations to clients.
  • Professional Indemnity Insurance implications.
  • Compliance with relevant legislation.

AI must never be used as a shortcut. It must be treated as a regulated professional risk requiring structured governance, transparency and professional sign-off. If you can confidently articulate the above, you will demonstrate to the assessors that you understand not only the opportunities of AI but your professional responsibilities when using it.

RICS Responsible Use of Artificial Intelligence Mock Interview

Question & Answer Practice

The following questions and answers are designed to test your knowledge of the RICS Professional Standard 'Responsible Use of Artificial Intelligence in Surveying Practice' prior to attending your final assessment interview.

Question) If you are successful in getting chartered today, can you please explain how you would use AI responsibly?

Answer) I would follow the guidance set out within the RICS Professional Standard: Responsible Use of Artificial Intelligence in Surveying Practice. This would require:

  • Transparency with clients – When using AI with a material impact, this would need to be communicated to and agreed with clients in advance. Any opt-out rights and details of the AI being used would need to be communicated within the terms of service.
  • Professional Scepticism – I would maintain professional scepticism when using AI which is based on having a questioning mentality and critically assessing AI outputs to ensure they are accurate rather than solely relying on them at face value.
  • Suitability Assessments – Prior to using AI, a full suitability assessment would need to be carried out to consider the nature of the task, alternative tools, risk of bias, risk of errors and environmental impact.
  • Practice Management – Effective practice management must be adopted which should include the use of an AI risk register, responsible use policies and an AI systems register.

Question) How frequently should you review your firm's AI risk register?

Answer) The RICS Professional Standard: Responsible Use of AI in Surveying Practice recommends that AI risk registers are reviewed on a quarterly basis. This is because AI evolves quickly, and what may have been acceptable six months ago may not be acceptable now. AI systems carry greater levels of risk because they are systemic, they adapt over time, they use large amounts of data and they may affect multiple projects, customers and stakeholders. Because of this high level of risk, AI risk registers must be maintained and reviewed quarterly.

Question) What processes would you need to setup as an RICS regulated firm using AI?

Answer) RICS regulated firms need to set up the following when using AI with a material impact:

  • Work in line with the RICS professional standard: Responsible Use of AI in Surveying Practice.
  • Maintain professional scepticism, based on having a questioning mentality and critically assessing AI outputs to ensure they are accurate rather than relying on them at face value.
  • Transparency around the use of AI should be maintained with clients in the terms of service, with clear opt-out rights.
  • Undertake full suitability assessments to determine whether AI should be used, considering the services being delivered, stakeholder impact, alternative tools, environmental impact, and the risk of bias and errors.
  • Data governance procedures that safeguard against the loss or unauthorised sharing of sensitive data, which include secure storage, data anonymisation, annual training, obtaining client consent and assessing third-party AI systems.
  • AI Risk Registers need to be reviewed on a quarterly basis along with setting up responsible use policies.
  • Human oversight should be maintained, as AI must not replace professional skill and judgement. Instead, AI should enhance efficiency while ultimate responsibility remains with a competent surveyor.
  • Maintain appropriate professional indemnity insurance (PII) with AI cover.

Question) What factors would you need to consider when using AI to ensure you comply with the RICS rules of conduct?

Answer)

  • Rule 1 Acting with Honesty & Integrity – AI use must be agreed and communicated in advance with clients. Clear opt-out rights and details of the AI being used must be included within the terms of service.
  • Rule 2 Acting with Competence & Rule 3 Providing a High Standard of Service – Maintaining professional scepticism and human oversight is key. AI outputs must be constantly reviewed and tested for accuracy. Because AI evolves quickly, AI risk registers need to be reviewed on a quarterly basis. Output reviews, peer reviews and comparison against non-AI outputs can also be used to check AI outputs. A full suitability review must also be carried out to ensure outputs are provided to a high standard.
  • Rule 4 Treating Others with Respect & Rule 5 Maintaining Public Confidence in the Profession – Full transparency around the use of AI must be provided to clients. Use of AI must be agreed in advance, with clear opt-out rights and details of the AI referenced within the terms of service. Due to the data risks around using AI, we must act to protect clients' sensitive data and ensure it is not lost or shared without their express permission. Strong data governance, with procedures such as data anonymisation, secure data storage and vetting of third-party AI providers, is needed to protect clients' data.

Question) What are the clients’ rights to explainability if you are using AI?

Answer) Clients may seek to obtain further information about the use of AI by RICS members and regulated firms. RICS-regulated firms must be able to provide, on request in writing:

  • The type of AI being used.
  • The basic ways of working and limitations of the AI.
  • The due diligence carried out before using the AI system.
  • The way relevant risks associated with the use of the AI are identified and managed.
  • The decisions made about the reliability of the outputs from the AI.

Question) What must you include within your terms of service if using AI?

Answer) The following items must be shown within the terms of service when using AI with a material impact. This must be done in writing and prior to the delivery of the firm's surveying services:

  • When and for what purpose AI is to be used.
  • When AI will be involved in the delivery of a surveying service.
  • The parts of the process for delivery of a surveying service in which AI will be involved.
  • The extent of professional indemnity cover for use of AI systems by the firm.
  • The internal processes to contest the use of an AI system.
  • The processes to seek redress if a client feels they have been negatively affected by the use of an AI system.
  • How a client can opt out of the use of AI systems in the delivery of a surveying service, if at all.

Question) Can you please explain your understanding of the term professional scepticism?

Answer) This is based on having a questioning mentality where you critically assess evidence and information rather than accepting things at face value. It is also important to remain alert to conditions that might make information misleading. It is an important principle to maintain, as erroneous outputs from AI can be repeated at scale and can appear completely plausible. It is also sometimes very difficult to determine how the outputs generated by AI have been calculated, meaning that constant review of accuracy and testing of outputs in different scenarios is needed.

Question) Are you aware of any documents the RICS has produced on the use of Artificial Intelligence?

Answer) The RICS have produced the Professional Standard, ‘Responsible use of artificial intelligence in surveying practice’. This is currently in its 1st Edition and is effective from 9th March 2026.

Question) Can you please provide an overview of the RICS Professional Standard ‘Responsible use of artificial intelligence in surveying practice’?

Answer) It sets the baseline professional standards for RICS members and regulated firms using AI systems in their work. It provides the basis for:

  • Upskilling the profession.
  • Minimising the risk of harm caused by AI systems in the delivery of services.
  • Enabling informed and clear decisions to be made on AI procurement and reliance on AI outputs.
  • Setting the baseline for good communication and information sharing with clients and other relevant stakeholders concerning the adoption of AI.
  • Providing a framework for the responsible development of AI systems by members and regulated firms.

Question) What does “material impact” mean in the context of AI use?

Answer) An output has a material impact if the use of AI is capable of influencing the delivery of the service. Typically, AI outputs that have a material impact on the delivery of a service will be outputs that affect how the work of the surveyor is rendered meaningful. For example outputs summarising documents that are then relied on when writing a report, outputs composing all or the significant parts of an opinion or outputs recommending which part of a building to investigate for a fault can be considered to have a material impact on the delivery of the service.

Question) What must a member do if AI use has a material impact on service delivery?

Answer) They must make a written record of that determination and the reasoning behind it.

Question) What are the data governance requirements for firms using AI?

Answer) Firms must safeguard private and confidential data by:

  • Secure storage (e.g. encryption/backups).
  • Restricting access.
  • Annual staff training.
  • Anonymising data.
  • Avoiding uploading confidential data unless written consent is obtained and the risks are verified as acceptable.

Question) What must be recorded before using any AI system with material impact?

Answer) A written assessment of whether AI is the most appropriate tool, considering:

  • The nature of the task.
  • Alternative tools available.
  • Environmental impact.
  • Stakeholder impact.
  • Data risks.
  • Risk of erroneous or biased outputs and their consequences.

Question) What must a written decision on AI output reliability include?

Answer) A written decision must include:

  • The key assumptions made.
  • Key concerns and reasons.
  • How concerns might be reduced.
  • Impact on overall reliability.
  • Whether the output can be used for its intended purpose.
  • The decision must be prepared under the supervision of a qualified surveyor.

Question) When must clients be informed that an AI output cannot be relied upon?

Answer) When the member determines that an output cannot be used for its intended purpose, they must inform the client in writing, with reasoning or a summary.

Question) What AI-related information must be included in terms of engagement?

Answer) The following items must be included within the terms of engagement:

  • When and where AI is used.
  • Details of the PII cover in place.
  • How the client can contest the use of AI.
  • How to opt out of AI use.


Jon Henry Baker

Jon Henry Baker is a Senior Chartered Quantity Surveyor with over 15 years industry experience working on Commercial, Retail, Education, Infrastructure and Industrial Projects in the UK and Ireland. Over the last 9 years he has coached many colleagues and helped them to pass their APC. He is passionate about making the APC a smooth and enjoyable process for candidates and is also the Author of 'RICS APC STUDY GUIDE, 1000+ Questions & Answers'.
