TITLE 1. ADMINISTRATION

PART 10. DEPARTMENT OF INFORMATION RESOURCES

CHAPTER 218. DATA GOVERNANCE AND MANAGEMENT

The Texas Department of Information Resources (department) proposes amendments to 1 Texas Administrative Code (TAC) Chapter 218, Subchapter B, §218.10, and Subchapter C, §218.20. The proposed amendments align these rules with the House Bill 1500 [89th Session (Regular)] amendments to Texas Government Code § 2054.515, which identify the data governance assessment as a distinct report and clarify the reporting requirements for this assessment.

The department proposes amendments to §218.10, for state agencies, and §218.20, for institutions of higher education. The amendments to these sections align the reporting requirements and deadlines for the data governance assessment with those found at Texas Government Code § 2054.515 and standardize the assessment tool used by state agencies to ensure the department's ability to collect and report upon the data as contemplated by Texas Government Code Chapter 2054. The department further proposes amendments to these sections that establish the data governance assessment as a discrete report separate from the information security assessment.

The department proposes amendments to §218.20 removing the clarification that the data maturity assessment is considered an information security standard and, as such, removing the requirement for public junior colleges to comply with it pursuant to Texas Government Code § 2054.0075.

There is no economic impact on rural communities, small businesses, or microbusinesses as a result of enforcing or administering the amended rule as proposed.

The amendments to this chapter only apply to state agencies and institutions of higher education.

The assessment of the impact of the proposed changes on institutions of higher education was prepared in consultation with the Information Technology Council for Higher Education (ITCHE) in compliance with Texas Government Code § 2054.121(c). DIR submitted the proposed amendments to ITCHE for its review. DIR determined that there was no direct impact on institutions of higher education as a result of the proposed rules.

Neil Cooke, the Chief Data Officer, has determined that there will be no fiscal impact upon state agencies, institutions of higher education, and local governments during the first five-year period following the adoption of the proposed amendments. State agencies are required by Texas Government Code § 2054.515(a) to complete an assessment of their data governance programs; the proposed amendments simply clarify the reporting requirements and establish the data governance assessment as a distinct report in alignment with House Bill 1500 [89th Session (Regular)]. As such, there is no fiscal impact as a result of the administrative rule. Mr. Cooke has further determined that for each year of the first five years following the adoption of the amended 1 Texas Administrative Code Chapter 218, there are no anticipated additional economic costs to persons or small businesses required to comply with the amendments as proposed.

Pursuant to Texas Government Code § 2001.0221, the agency provides the following Governmental Growth Impact Statement for the proposed amendments. The agency has determined the following:

1. The proposed rules neither create nor eliminate a government program. Texas Government Code § 2054.515 requires state agencies to complete a data maturity assessment and submit it to the department. The proposed amendments merely administer the minimum requirements for this assessment and establish the procedures to submit it to the department.

2. Implementation of the proposed rules does not require the creation or elimination of employee positions. There are no additional employees required nor employees eliminated to implement the rule as proposed.

3. Implementation of the proposed rules does not require an increase or decrease in future legislative appropriations to the agency. There is no fiscal impact as implementing the rule does not require an increase or decrease in future legislative appropriations.

4. The proposed rules do not require an increase or decrease in fees paid to the agency.

5. The proposed rules do not create a new rule.

6. The proposed rules do not repeal an existing regulation.

7. The proposed rules do not increase or decrease the number of individuals subject to the rule's applicability. Texas Government Code § 2054.515 requires state agencies to complete the data maturity assessment; Texas Government Code Chapter 2054 establishes the parameters of the term "state agency," which identifies the entities that are subject to the amended rule sections' requirements.

8. The proposed rules do not positively or adversely affect the state's economy. The creation of rules establishing minimum requirements for an entity's data maturity assessment ensures that state agencies are scrutinizing their data governance program to ensure rigorous security standards and alignment with best practices.

Written comments on the proposed rules may be submitted to Christi Koenig Brisky, Assistant General Counsel, 300 West 15th Street, Suite 1300, Austin, Texas 78701, or to rules.review@dir.texas.gov. Comments will be accepted for 30 days after publication in the Texas Register.

SUBCHAPTER B. DATA GOVERNANCE AND MANAGEMENT FOR STATE AGENCIES

1 TAC §218.10

The amendments are proposed pursuant to Texas Government Code § 2054.052(a), which authorizes the department to adopt rules as necessary to implement its responsibilities under Texas Government Code Chapter 2054, and Texas Government Code § 2054.515(c), which directs the department to establish the data maturity assessment requirements by rule.

No other code, article, or statute is affected by this proposal.

§218.10. Data Maturity Assessment.

(a) A state agency shall conduct a biennial data maturity assessment and submit the report of the assessment results in compliance with Texas Government Code § 2054.515. [by November 15 of each even-numbered year, December 1 of the year in which the agency completes the assessment, or the 60th day after the agency completes the assessment, whichever comes first.]

(b) The data maturity assessment tool promulgated by the department shall include at least the following elements:

(1) Data Architecture;

(2) Data Analytics;

(3) Data Governance and Standardization;

(4) Data Management and Methodology;

(5) Data Program Management and Change Control;

(6) Data Quality;

(7) Data Security and Privacy;

(8) Data Strategy and Roadmap;

(9) Master Data Management; [and]

(10) Metadata Management; and[.]

(11) Texas Open Data Portal.
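For agencies building internal tooling around this requirement, the following is a minimal, hypothetical Python sketch that represents the eleven elements in subsection (b) as a scoring structure. The element names come from the rule; the 1-5 scale, the averaging logic, and all identifiers are illustrative assumptions, not the department's promulgated assessment tool.

# Hypothetical sketch only: the scale and averaging are assumptions,
# not the department's assessment tool.
from enum import Enum
from statistics import mean

class AssessmentElement(Enum):
    # Element names as listed in subsection (b) of this section.
    DATA_ARCHITECTURE = "Data Architecture"
    DATA_ANALYTICS = "Data Analytics"
    DATA_GOVERNANCE_AND_STANDARDIZATION = "Data Governance and Standardization"
    DATA_MANAGEMENT_AND_METHODOLOGY = "Data Management and Methodology"
    DATA_PROGRAM_MANAGEMENT_AND_CHANGE_CONTROL = "Data Program Management and Change Control"
    DATA_QUALITY = "Data Quality"
    DATA_SECURITY_AND_PRIVACY = "Data Security and Privacy"
    DATA_STRATEGY_AND_ROADMAP = "Data Strategy and Roadmap"
    MASTER_DATA_MANAGEMENT = "Master Data Management"
    METADATA_MANAGEMENT = "Metadata Management"
    TEXAS_OPEN_DATA_PORTAL = "Texas Open Data Portal"

def overall_maturity(scores: dict[AssessmentElement, int]) -> float:
    """Average assumed 1-5 maturity scores, requiring every element to be scored."""
    missing = [e.value for e in AssessmentElement if e not in scores]
    if missing:
        raise ValueError(f"Unscored elements: {missing}")
    return mean(scores.values())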

(c) State agencies shall [may] complete their data maturity assessment through the [a] method prescribed [identified] by the department. [or by using their own tool that includes the elements required by subsection (b) of this section.]

(d) The data maturity assessment completed pursuant to this section addresses the requirement to review an agency's data governance program found in Texas Government Code § 2054.515[(a)(2)].

[(e) To comply with Texas Government Code § 2054.515(a), a state agency must complete a data maturity assessment that is compliant with this section in addition to addressing all information security assessment requirements enumerated in 1 Texas Administrative Code Chapter 202.]

The agency certifies that legal counsel has reviewed the proposal and found it to be within the state agency's legal authority to adopt.

Filed with the Office of the Secretary of State on October 23, 2025.

TRD-202503822

Joshua Godbey

General Counsel

Department of Information Resources

Earliest possible date of adoption: December 7, 2025

For further information, please call: (512) 475-4531


SUBCHAPTER C. DATA GOVERNANCE AND MANAGEMENT FOR INSTITUTIONS OF HIGHER EDUCATION

1 TAC §218.20

The amendments are proposed pursuant to Texas Government Code § 2054.052(a), which authorizes the department to adopt rules as necessary to implement its responsibilities under Texas Government Code Chapter 2054, and Texas Government Code § 2054.515(c), which directs the department to establish the data maturity assessment requirements by rule.

No other code, article, or statute is affected by this proposal.

§218.20. Data Maturity Assessment.

(a) An institution of higher education shall conduct a biennial data maturity assessment and submit the report of the assessment results in compliance with Texas Government Code § 2054.515. [by November 15 of each even-numbered year, December 1 of the year in which the institution of higher education completes the assessment, or the 60th day after the institution of higher education completes the assessment, whichever comes first.]

(b) An institution of higher education's data maturity assessment shall include at least the following elements:

(1) Data Architecture;

(2) Data Analytics;

(3) Data Governance and Standardization;

(4) Data Management and Methodology;

(5) Data Program Management and Change Control;

(6) Data Quality;

(7) Data Security and Privacy;

(8) Data Strategy and Roadmap;

(9) Master Data Management; [and]

(10) Metadata Management; and[.]

(11) Texas Open Data Portal.

(c) Institutions of higher education shall [may] complete their data maturity assessment through the [a] method prescribed [identified] by the department. [or by using their own tool that includes the elements required by subsection (b) of this section.]

(d) The data maturity assessment completed pursuant to this section addresses the requirement to review an institution of higher education's data governance program found in Texas Government Code § 2054.515[(a)(2)].

[(e) To comply with Texas Government Code § 2054.515(a), an institution of higher education must complete a data maturity assessment that is compliant with this section in addition to addressing all information security assessment requirements enumerated in 1 Texas Administrative Code Chapter 202.]

[(f) To the extent that the data maturity assessment is an element of the information security assessment required by Texas Government Code § 2054.515 and codified at 1 Texas Administrative Code Chapter 202, it is an information security standard to which a public junior college is subject pursuant to Texas Government Code § 2054.0075.]

The agency certifies that legal counsel has reviewed the proposal and found it to be within the state agency's legal authority to adopt.

Filed with the Office of the Secretary of State on October 23, 2025.

TRD-202503823

Joshua Godbey

General Counsel

Department of Information Resources

Earliest possible date of adoption: December 7, 2025

For further information, please call: (512) 475-4531


CHAPTER 219. ARTIFICIAL INTELLIGENCE

The Texas Department of Information Resources (department) proposes the creation of 1 Texas Administrative Code (TAC) Chapter 219, Subchapter A, §219.1 and §219.11 and Subchapter B, §§219.20 - 219.24. This proposed chapter addresses the Senate Bill 1964 [89th Session (Regular)] requirements for the department to create rules establishing an artificial intelligence (AI) code of ethics and minimum risk management and governance standards for the development, procurement, deployment, and use of heightened scrutiny AI systems by certain governmental entities in addition to any other rules necessary to implement Texas Government Code Chapter 2054, Subchapter S.

Within Subchapter A, the department proposes the creation of §219.1 and §219.11. Section 219.1 introduces specialized definitions required by the rule, including incorporating statutory references to terms already defined by state law, such as AI system, heightened scrutiny AI system, controlling factor, and consequential decision. Section 219.11 establishes the AI code of ethics required by Senate Bill 1964 [89th Session (Regular)], which details the ethical guidelines for the procurement, development, deployment, or use of AI systems by governmental entities. Under Senate Bill 1964 [89th Session (Regular)], state agencies, including institutions of higher education, and local governments are required to adopt the AI Code of Ethics established by Section 219.11.

The department proposes the creation of Subchapter B, §§219.20 - 219.24. This subchapter establishes the minimum standards required by Senate Bill 1964 [89th Session (Regular)] as recognized by Section 219.20. Specifically, this subchapter establishes AI risk management standards in Section 219.21, the mandatory risk assessment considerations required when a state agency or local government develops, procures, deploys, or uses a heightened scrutiny AI system in Section 219.22, and the AI impact assessment required of a state agency when it or a vendor on its behalf deploys or uses a heightened scrutiny AI system in Section 219.23. As required by Senate Bill 1964 [89th Session (Regular)], Section 219.24 establishes the guidelines for risk management frameworks, acceptable use policies, employee training, and unlawful harm risk mitigation when deploying heightened scrutiny AI systems.

There is no economic impact on rural communities or small businesses as a result of enforcing or administering the rule as proposed.

The proposed rule applies to state agencies, institutions of higher education, and, in limited scope as required by Senate Bill 1964 [89th Session (Regular)], local governments, a term which may include approximately 1,100 rural communities as defined by Texas Government Code § 2006.001(1-a). It does not apply to small businesses or micro-businesses. As a result, there is no economic impact on small businesses or micro-businesses as a result of enforcing or administering the amended rule as proposed.

There is no adverse economic impact to rural communities as a result of the proposed rule. Previously, there were no standardized ethical expectations or risk management requirements for the use of AI by Texas governmental entities. As a result, state agencies and local governments, including rural communities, have implemented patchwork solutions that may result in inconsistent policies surrounding the ethical use of AI. With the passage of Senate Bill 1964 [89th Session (Regular)], state agencies, institutions of higher education, and local governments, including rural communities as defined by Texas Government Code § 2006.001(1-a), must establish uniform minimum ethical standards for the use of AI within their organizations. These guidelines are common-sense principles that align with general standards surrounding the ethical and secure use of AI by the government. Implementing the code of ethics and ethical principles will serve to ensure the ongoing security of the State of Texas. In addition, Section 219.22 details the considerations a local government must undertake prior to the development, procurement, deployment, or use of a heightened scrutiny AI system. Given the specific statutory definition for heightened scrutiny AI systems, it is unlikely that a rural community has such a system at this time. To the extent that a rural community does implement such a system in the future, the required risk assessment includes standard considerations similar to those required for the implementation of any other IT system. The department discussed the implications of Senate Bill 1964 [89th Session (Regular)] with local governments prior to the passage of this legislation to ensure that there was no adverse impact on, and the least administrative burden upon, local governments, including rural communities. As a result, and due to the limited scope of these administrative rules, there is no adverse impact to rural communities.

The assessment of the impact of the proposed changes on institutions of higher education was prepared in consultation with the Information Technology Council for Higher Education (ITCHE) in compliance with Texas Government Code § 2054.121(c). DIR submitted the proposed rules to ITCHE for its review. ITCHE members identified several impacts to institutions of higher education. Upon review, these impacts are a direct result of the requirements of Senate Bill 1964 [89th Session (Regular)] rather than those imposed by the proposed rule. As a result, DIR determined that there is no impact upon institutions of higher education as a result of the proposed rules.

Jennie Hoelscher, Privacy Officer, has determined that there will be no fiscal impact upon state agencies, institutions of higher education, and local governments during the first five-year period following the adoption of the proposed rules. The creation of rules administering the requirements of Texas Government Code Chapter 2054, Subchapter S, including the creation of a statewide AI System Code of Ethics, minimum risk management and governance standards for the development, procurement, deployment, and use of heightened scrutiny AI systems by governmental entities, and guidelines for frameworks, policies, and trainings, is in compliance with the department's specific rulemaking authority granted by Senate Bill 1964 [89th Session (Regular)] and addresses the statutory requirements imposed upon the department without resulting in a fiscal impact as a result of the administrative rule. Ms. Hoelscher has further determined that for each year of the first five years following the adoption of new 1 TAC Chapter 219, there are no anticipated additional economic costs to persons or small businesses required to comply with the proposed new rules.

Pursuant to Texas Government Code § 2001.0221, the agency provides the following Governmental Growth Impact Statement for the proposed amendments. The agency has determined the following:

1. The proposed rules neither create nor eliminate a government program. Texas Government Code Chapter 2054, Subchapter S, requires the department to identify ethical guidelines governing AI use by governmental entities and establish standards for implementing heightened scrutiny AI systems. The proposed rules merely administer those minimum requirements.

2. Implementation of the proposed rules does not require the creation or elimination of employee positions. There are no additional employees required nor employees eliminated to implement the rule as proposed.

3. Implementation of the proposed rules does not require an increase or decrease in future legislative appropriations to the agency. There is no fiscal impact as implementing the rule does not require an increase or decrease in future legislative appropriations.

4. The proposed rules do not require an increase or decrease in fees paid to the agency.

5. The proposed rules create a new rule chapter that addresses the statutory requirements for administrative rules imposed upon the department by Senate Bill 1964 [89th Session (Regular)].

6. The proposed rules do not repeal an existing regulation.

7. The proposed rules do not increase or decrease the number of individuals subject to the rule's applicability. The entities to whom these rules apply are identified by Texas Government Code Chapter 2054, Subchapter S.

8. The proposed rules do not positively or adversely affect the state's economy. The creation of rules establishing an AI code of ethics and minimum standards for heightened scrutiny AI systems ensures that governmental entities are scrutinizing their use of AI to ensure rigorous security standards and alignment with best practices.

Written comments on the proposed rules may be submitted to Christi Koenig Brisky, Assistant General Counsel, 300 West 15th Street, Suite 1300, Austin, Texas 78701, or to rules.review@dir.texas.gov. Comments will be accepted for 30 days after publication in the Texas Register.

SUBCHAPTER A. CODE OF ETHICS AND GENERAL INFORMATION

1 TAC §219.1, §219.11

The new rules are proposed pursuant to Texas Government Code § 2054.052(a), which authorizes the department to adopt rules as necessary to implement its responsibilities under Texas Government Code Chapter 2054; Texas Government Code § 2054.703, which requires the department to establish minimum standards for heightened scrutiny AI systems; and Texas Government Code § 2054.702, which requires the department to establish an AI system code of ethics.

No other code, article, or statute is affected by this proposal.

§219.1. Definitions.

The following words and terms, when used in this chapter, shall have the following meanings, unless the context clearly indicates otherwise.

(1) AI--Artificial Intelligence.

(2) Artificial Intelligence System--As defined by Texas Government Code § 2054.003(1-a).

(3) Consequential Decision--As defined by Texas Government Code § 2054.003(2-b).

(4) Controlling Factor--As defined by Texas Government Code § 2054.003(2-c).

(5) Department--The Department of Information Resources.

(6) Executive Head--The top-most senior executive with operational accountability for a state agency or local government.

(7) Governmental Entities--State agencies, including institutions of higher education, and local governments.

(8) Heightened Scrutiny Artificial Intelligence System--As defined by Texas Government Code § 2054.003(6-a).

(9) Information Resources--As defined by Texas Government Code § 2054.003(7).

(10) Information Resources Technologies--As defined by Texas Government Code § 2054.003(8).

(11) Local Government--As defined by Texas Government Code § 2054.003(9).

(12) Personal Identifying Information (PII)--As defined by Texas Business & Commerce Code § 521.002(a)(1).

(13) Principal Basis--As defined by Texas Government Code § 2054.003(11).

(14) State Agency--As defined by Texas Government Code § 2054.003(13).

(15) Unlawful Harm--As defined by Texas Government Code § 2054.701.

§219.11. Code of Ethics and the Ethical Principles of Artificial Intelligence.

(a) As required by Texas Government Code § 2054.702, state agencies and local governments shall adopt the AI Code of Ethics established by this section and follow the ethical principles included herein as they procure, develop, deploy, or use artificial intelligence systems.

(b) Preamble

(1) AI systems have the potential to transform the way our state and local governments serve Texans. AI systems can create efficiencies, support economic and scientific advancement, and improve the safety and well-being of our communities. The State of Texas supports the use of AI systems by governmental entities to improve the services they deliver to Texans and to lead in innovative AI adoption in the public sector.

(2) While they have significant potential value, AI systems also pose substantial risks if not implemented ethically and responsibly. AI risks vary based on the system involved, how it is used, and who uses it. AI systems are often trained on large amounts of data from a variety of sources, which can lead to inaccurate outputs. To the extent that AI systems are trained on or used to process PII, they raise significant privacy concerns. Malicious actors can utilize AI to develop more advanced cyberattacks, bypass security measures, and exploit vulnerabilities in systems. These and other AI risks make it a uniquely challenging technology for governmental entities to use safely, but with appropriate guardrails, governmental entities can limit the risks of AI and secure its many benefits for Texans.

(3) Governmental entities must limit the potential harm of AI systems by managing risk and prioritizing trustworthy and responsible development and deployment of AI consistent with the National Institute of Standards and Technology AI Risk Management Framework. Creating trustworthy AI requires balancing each of these principles based on the identified risks of an AI system and the context in which it is used.

(4) This section articulates the principles of ethical AI implementation that governmental entities must strive for when procuring, developing, designing, or using AI systems.

(c) Human Oversight and Control

(1) Human oversight plays a crucial role in ensuring that AI systems operate ethically. While AI can analyze vast amounts of data much faster--and sometimes more accurately--than humans, it lacks the human judgment necessary to ensure that its decisions align with societal values and the rights granted to individuals under the law. Ensuring human control over AI systems mitigates risks of inaccurate or undesirable outputs and allows for revisions to the rules established during development of the system and to the data that supports the system's decision-making.

(2) Governmental entities:

(A) Must deploy AI systems in ways that enable humans to review and analyze inputs and outputs at appropriate intervals throughout the AI lifecycle;

(B) May incorporate a level of human oversight reasonably commensurate to the risks associated with a particular AI system, with heightened scrutiny AI systems requiring increased human oversight relative to lower risk systems; and

(C) Must ensure AI systems can be disabled until harmful or inaccurate decision making can be remedied.
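As one way to picture the disablement expectation in subparagraph (C), the short Python sketch below wraps a model call behind a kill switch and routes decisions to human review while the switch is set. The class name, flag, and fallback behavior are illustrative assumptions, not a required design.

from typing import Callable

class AISystemGate:
    """Hypothetical wrapper: a kill switch so a human can take over decision-making."""

    def __init__(self, invoke_model: Callable[[str], str]):
        self._invoke_model = invoke_model
        self._disabled = False  # set while harmful or inaccurate output is being remedied

    def disable(self) -> None:
        self._disabled = True

    def enable(self) -> None:
        self._disabled = False

    def decide(self, case: str) -> str:
        # While disabled, no model output is produced; the case goes to a person.
        if self._disabled:
            return "ROUTED_TO_HUMAN_REVIEW"
        return self._invoke_model(case)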

(d) Fairness

(1) The data used to develop AI systems must adequately represent the subjects or people about which AI systems make judgments, decisions, or predictions. Incomplete or inaccurate data can result in unlawful harm.

(2) Governmental entities:

(A) Must ensure their use of AI systems does not infringe upon the legally protected rights and liberties of the individuals they serve or result in unlawful harm; and

(B) Must implement data governance practices for AI systems throughout the AI system's lifecycle to ensure fairness.

(e) Accuracy

(1) While AI systems are overall improving in their ability to deliver more accurate results, inaccurate outputs remain a significant risk when using AI systems.

(2) Governmental entities:

(A) Must train their employees to understand the importance of verifying AI outcomes for accuracy;

(B) Must formalize processes for monitoring system accuracy before the deployment of an AI system and throughout its life cycle, as a system's accuracy may change over time; and

(C) Shall, when feasible, implement processes to improve the accuracy of AI systems by training the systems using human feedback or improving retrieval-augmented generation by ensuring the accuracy and relevance of the underlying data used by the tool to develop answers.
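The retrieval-augmented generation idea referenced in subparagraph (C) can be pictured with the toy Python sketch below: answers are grounded in a curated set of verified documents supplied at question time. The keyword-overlap retrieval and all names are illustrative assumptions; an actual deployment would use the entity's chosen search or embedding service.

def retrieve(question: str, curated_docs: list[str], top_k: int = 2) -> list[str]:
    """Rank curated, verified documents by crude keyword overlap with the question."""
    q_terms = set(question.lower().split())
    ranked = sorted(curated_docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, curated_docs: list[str]) -> str:
    """Instruct the model to answer only from the supplied, agency-verified context."""
    context = "\n".join(retrieve(question, curated_docs))
    return f"Answer using only this verified context:\n{context}\n\nQuestion: {question}"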

(f) Redress

(1) Providing a method for redress will promote public trust in both the AI system and in the entity that deploys it.

(2) Governmental entities:

(A) Must provide a mechanism to seek redress for those impacted when an AI system makes a consequential decision that unfairly impacts an individual or group in a material way;

(B) Must have a designated point of contact for individuals to address when seeking information about an unfair consequential decision; and

(C) Must develop internal procedures to allow employees to identify and remedy negative impacts caused by the use of AI systems.

(g) Transparency

(1) Establishing transparency for AI systems means providing information about the data, models, and outputs of an AI system to both the individuals interacting with the system and those deploying it. Strong transparency practices will build public trust in the AI systems governmental entities use.

(2) Governmental entities:

(A) Must collaborate with developers of AI systems and demand transparency to understand how a system operates, the source of the data the system was trained on, and its intended use cases;

(B) Must strive to understand the capabilities of the system and how it makes decisions;

(C) Must disclose when individuals interact with an AI system and when an AI system is used to make material decisions about their rights or access to governmental services; and

(D) Must never represent AI systems as human when interacting with the public.

(h) Data Privacy

(1) Governmental entities have a responsibility to protect the PII they collect and process about individuals, and both legal and ethical restrictions exist on what PII entities share with third parties. Data privacy principles likewise apply to the PII governmental entities process in and share with AI systems.

(2) The most effective method for protecting PII is through data minimization.

(3) Many AI systems rely on vast amounts of PII to make predictions and decisions. Sharing PII with an AI tool may violate privacy laws and obligations the entity has to the individual.

(4) Governmental entities:

(A) May collect and maintain only that PII needed for operations and must establish a process to delete PII consistent with records retention schedules and other legal requirements.

(B) Must strive to understand what PII the AI system uses, how that PII has been and will be collected, and how the tool uses, stores, and shares PII with third parties prior to using any government-held PII in an AI system;

(C) Must train employees about the risk of inputting sensitive or PII into publicly available AI systems that use inputs to train the model and share those inputs with other users of the AI system outside of the governmental entity; and

(D) Must strive to practice data minimization and ensure they abide by any purpose limitations granted when the PII was first collected, or as expressly allowed by law.
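As an illustration of the data minimization practice described in this subsection, the short Python sketch below drops PII fields and keeps only the fields an AI task actually needs before a record leaves the entity's systems. The field names and helper are hypothetical examples, not a definitive PII taxonomy.

# Example PII fields; an entity would substitute its own data classification.
PII_FIELDS = {"name", "ssn", "date_of_birth", "driver_license", "home_address"}

def minimize(record: dict, fields_needed: set[str]) -> dict:
    """Keep only the non-PII fields the AI task actually requires."""
    return {k: v for k, v in record.items()
            if k in fields_needed and k not in PII_FIELDS}

# Usage: only the case type and county are shared with the AI system.
# minimize({"name": "Jane Doe", "case_type": "renewal", "county": "Travis"},
#          fields_needed={"case_type", "county"})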

(i) Security

(1) AI systems are subject to security vulnerabilities. Common security concerns in the AI context involve data poisoning or malicious code injection, exfiltration of models or data within the AI system, and improper access controls that result in unauthorized access to the AI system itself. Secure AI systems will maintain the confidentiality and integrity of the AI system as well as the data it contains even when unexpected events or changes in their environment or use occur.

(2) Governmental entities:

(A) Must monitor, secure, and test AI systems to prevent or limit security attacks; and

(B) Must demand that AI system providers disclose known vulnerabilities and resolutions in a timely manner to the governmental entities deploying those systems.

(j) Accountability and Liability

(1) While governmental entities may delegate tasks and decision making to AI systems, the entities remain accountable for the decisions the AI systems make and the outcomes they produce. Use of AI systems for employment-related tasks or to make consequential decisions poses heightened risks.

(2) Governmental entities:

(A) Must provide training to employees on how to use AI systems in an effective, safe, and ethical way;

(B) Must ensure their vendors are contractually bound to these AI ethical principles and any relevant laws or regulations governing the use of AI systems; and

(C) Must ensure AI systems they deploy comply with the legal obligations they have at both the state and federal level.

(3) When deploying AI systems, governmental entities must establish appropriate retention schedules for the AI system's records and consider the Public Information Act implications related to the storage of data inputs and outputs.

(k) Evaluation

(1) AI systems can change over time, as can the purposes for which they are used.

(2) Governmental entities:

(A) Must establish methods for regular evaluation of AI systems to ensure the systems provide ongoing benefit to the populations they serve; and

(B) Must document such evaluations.

(l) Documentation

(1) Documentation provides a critical element for managing AI risk. Consistent documentation of preliminary assessments, ongoing monitoring and testing, and complaints provides governmental entities insight into the operations and improvements of their AI systems over their lifecycle. Documentation allows entities to evaluate the value of AI systems and determine where best to spend resources in further developing AI solutions.

(2) Governmental entities should maintain records of:

(A) The sources of data used in the AI system; and

(B) How the AI system is modified throughout the system's life cycle.

The agency certifies that legal counsel has reviewed the proposal and found it to be within the state agency's legal authority to adopt.

Filed with the Office of the Secretary of State on October 23, 2025.

TRD-202503820

Joshua Godbey

General Counsel

Department of Information Resources

Earliest possible date of adoption: December 7, 2025

For further information, please call: (512) 475-4531


SUBCHAPTER B. REQUIRED MINIMUM STANDARDS

1 TAC §§219.20 - 219.24

The new rules are proposed pursuant to Texas Government Code § 2054.052(a), which authorizes the department to adopt rules as necessary to implement its responsibilities under Texas Government Code Chapter 2054; Texas Government Code § 2054.703, which requires the department to establish minimum standards for heightened scrutiny AI systems; and Texas Government Code § 2054.702, which requires the department to establish an AI system code of ethics.

No other code, article, or statute is affected by this proposal.

§219.20. Statutory Requirement.

This subchapter establishes the minimum standards required by Texas Government Code § 2054.703 as enacted pursuant to Senate Bill 1964 of the Eighty-ninth Regular Session.

§219.21. AI Risk Management.

(a) A state agency or local government shall designate an employee as the AI Risk Officer.

(1) The AI Risk Officer is responsible for promoting ethical AI system procurement, development, deployment, and use within the state agency or local government, consistent with the AI Code of Ethics established by this chapter and the AI Risk Management Framework published by the National Institute of Standards and Technology.

(2) If a state agency or local government deploys a heightened scrutiny AI system, the AI Risk Officer is responsible for ensuring that the risk assessment is completed for that system. The AI Risk Officer shall evaluate the completed risk assessment and ensure that the heightened scrutiny AI system is deployed consistent with the minimum standards established by this chapter.

(3) In filling this role, the state agency or local government may employ an individual solely for this purpose or may add this responsibility to a current employee's existing job duties.

(b) A state agency or local government shall establish a process to identify and inventory all implementations of AI systems that qualify as heightened scrutiny AI systems.
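One way to picture the inventory process required by subsection (b) is the hypothetical Python sketch below, which records each AI system implementation and lets the AI Risk Officer list those flagged as heightened scrutiny. The record fields are assumptions about one reasonable inventory shape, not a prescribed format.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    intended_use: str
    heightened_scrutiny: bool  # per the definition referenced in §219.1
    risk_assessment_completed: bool = False

@dataclass
class AIInventory:
    records: list[AISystemRecord] = field(default_factory=list)

    def register(self, record: AISystemRecord) -> None:
        """Add a newly identified AI system implementation to the inventory."""
        self.records.append(record)

    def heightened_scrutiny_systems(self) -> list[AISystemRecord]:
        """List the implementations the AI Risk Officer must review."""
        return [r for r in self.records if r.heightened_scrutiny]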

§219.22. AI Risk Assessment for Heightened Scrutiny AI Systems.

(a) Before a state agency or local government develops, procures, deploys, or uses a heightened scrutiny AI system and at the time that a material change is made to the system, the state agency or local government shall conduct a written AI risk assessment to consider the probability and severity of harm that could occur as the result of implementation of the AI system.

(b) The risk assessment shall consider and document:

(1) The AI system's known security risks and mitigation steps available to limit those risks;

(2) The heightened scrutiny AI system's performance metrics, including:

(A) measurements of the accuracy and relevance of the system's outputs; and

(B) measurements of the operational aspects of the system, including model latency, uptime, and error rate; and

(3) The heightened scrutiny AI system's transparency, including information about:

(A) The system's algorithms and how the system makes decisions;

(B) The data used to train the system's model; and

(C) The availability of inputs and outputs to monitor the system's decision-making over time.
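The operational metrics named in paragraph (b)(2)(B) can be computed in many ways; the small Python sketch below shows one hypothetical approach to error rate, uptime, and a latency percentile from monitoring records. The log format and the 95th-percentile choice are assumptions for illustration only.

def error_rate(outcomes: list[bool]) -> float:
    """Fraction of logged requests that failed (False = failure)."""
    return outcomes.count(False) / len(outcomes) if outcomes else 0.0

def uptime(minutes_available: int, minutes_in_period: int) -> float:
    """Share of the reporting period during which the system was available."""
    return minutes_available / minutes_in_period

def latency_p95(latencies_ms: list[float]) -> float:
    """Approximate 95th-percentile model latency in milliseconds."""
    ordered = sorted(latencies_ms)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]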

(c) When a state agency or local government is deploying any heightened scrutiny AI system, the AI Risk Officer shall:

(1) Review the completed written risk assessment prepared for that system prior to system deployment; and

(2) Approve or deny deployment of the system based on the risk and mitigation measures identified by the completed written risk assessment. At a minimum, the AI Risk Officer shall notify the state agency or local government's executive head or their designee of a decision to deploy a heightened scrutiny AI system. A state agency or local government may also establish a process for consultation or final approval by the executive head or their designee, as the state agency or local government determines appropriate.

(d) The state agency or local government shall maintain a record of the completed written risk assessment and all relevant documents for as long as required by the applicable state records retention schedule.

§219.23. AI Impact Assessments.

(a) A state agency that deploys or uses a heightened scrutiny artificial intelligence system or, at the request of a contracting state agency, a vendor that contracts with the state agency for the deployment or use of a heightened scrutiny AI system shall conduct an impact assessment:

(1) prior to deploying the heightened scrutiny AI system; and

(2) at the time any material change is made to:

(A) The system;

(B) The state or local data used by the system; or

(C) The intended use of the system.

(b) A heightened scrutiny AI system impact assessment required by this section must include:

(1) A description of the system, including its training data, model, and intended use;

(2) How the agency will use the system and who within the state agency is responsible for its deployment and ongoing monitoring and evaluation;

(3) Whether the system will process or store any PII provided by the agency or the users of the agency's system and, if so, whether the system will use that information to train the model;

(4) Potential risks of unlawful harm that the agency identifies and steps the agency can take to limit those risks;

(5) System limitations identified by the agency;

(6) How the agency will monitor the system outputs to evaluate accuracy and harm and identify the intervals at which the monitoring will occur; and

(7) The retention duration for the system's inputs and outputs and the method for deleting outputs once the identified retention period has passed.

(c) A state agency or a vendor contracted by the state agency shall either develop its own AI impact assessment for heightened scrutiny artificial intelligence systems or use the standard impact assessment form created by the department.

(d) A state agency that deploys or uses a heightened scrutiny artificial intelligence system shall maintain a record of the AI impact assessment for as long as required by the applicable state records retention schedule.

(1) If a vendor that contracts with the state agency for the deployment or use of a heightened scrutiny AI system conducts an impact assessment, the state agency must receive a copy of the completed impact assessment, which is treated as the document of record.

(2) The state agency shall make a copy of the assessment available to the department on request.

(e) Local governments shall:

(1) Comply with all requirements established by the department regarding the procurement of information resources and information resources technology that include a heightened scrutiny AI system;

(2) Consider conducting an impact assessment in alignment with the state agency requirements established by subsections (a) - (c) when deploying a heightened scrutiny AI system; and

(3) Review relevant resources posted by the department on its website.

§219.24. Guidelines for Frameworks, Policies, and Trainings.

(a) This section establishes the guidelines required by Texas Government Code § 2054.703(b)(4) as enacted pursuant to Senate Bill 1964 of the Eighty-ninth Regular Session.

(b) When a state agency or local government deploys or uses a heightened scrutiny AI system, it must identify the acceptable use cases for the system, identify its limitations, and adopt an acceptable use policy to prevent uses other than those approved by the agency for the heightened scrutiny artificial intelligence system. All employees must be adequately trained on the acceptable use policy.
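To make the acceptable use policy concept in subsection (b) concrete, the hypothetical Python sketch below expresses approved uses, prohibited uses, and known limitations as data an entity could publish to staff and check in tooling. The example system, use cases, and structure are illustrative assumptions, not a required policy format.

ACCEPTABLE_USE_POLICY = {
    "system": "Benefits eligibility screening assistant",  # hypothetical system name
    "approved_uses": [
        "Drafting plain-language summaries of eligibility criteria for caseworkers",
        "Flagging incomplete applications for human review",
    ],
    "prohibited_uses": [
        "Issuing a final eligibility determination without human review",
        "Processing applicant PII outside approved agency systems",
    ],
    "known_limitations": [
        "Outputs may be inaccurate for rule changes newer than the training data",
    ],
    "training_required": True,
}

def is_approved(use_case: str) -> bool:
    """Check a proposed use against the approved list before it is allowed."""
    return use_case in ACCEPTABLE_USE_POLICY["approved_uses"]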

(c) A state agency or local government that deploys or uses a heightened scrutiny AI system shall provide employees or contractors who access, use, or manage the heightened scrutiny AI system with training regarding identified risks and appropriate methods for mitigating those risks.

(d) A state agency or local government that contracts with vendors to deploy a heightened scrutiny AI system shall mitigate third party risk by contractually requiring those vendors to implement the AI Risk Management Framework published by the National Institute of Standards and Technology for heightened scrutiny AI systems.

The agency certifies that legal counsel has reviewed the proposal and found it to be within the state agency's legal authority to adopt.

Filed with the Office of the Secretary of State on October 23, 2025.

TRD-202503821

Joshua Godbey

General Counsel

Department of Information Resources

Earliest possible date of adoption: December 7, 2025

For further information, please call: (512) 475-4531