Agenda
Opening speaker: Alexandra Reeve Givens, President & CEO, Center for Democracy & Technology
What are the major categories of risk posed by AI in state governmental
applications? How is the federal government approaching risk management?
Panel Discussion: Perspectives from State Leaders
Katy Ruckle, Chief Privacy Officer, Washington Technology Solutions
Andrew Wheeler, Director, Office of Regulatory Management, Virginia
Audience Questions, Moderated Discussion with all Speakers and Panelists
Advancing Responsible AI:
Opportunities for States
December 5, 2023
Alexandra Reeve Givens
President & CEO, Center for Democracy & Technology
Center for Democracy & Technology
CDT is a nonprofit, nonpartisan organization based in Washington, D.C. and Brussels.
We fight for technology to support the public good while protecting against invasive, discriminatory, and exploitative uses.
We:
advocate for sound laws & policies
advise companies & government on responsible tech use and design.
Agenda:
1. What types of risks?
2. Elements of trustworthy AI
3. Steps executive agencies can take:
Government’s own use of AI
Responding to harmful uses of AI
The News Cycle About Future AI Harms:
Security & Surveillance: facial recognition; predictive policing; risk assessment in bail & sentencing
Education: student activity monitoring; remote exam proctoring; personalized learning
Consumer Fraud & Abuse: fraud; cybersecurity; extortion; sexual content
Commercial Data Practices: employment; housing; lending; insurance; ad targeting
Benefits & Public Health: eligibility determinations; fraud detection; medical research & spending
Information Harms & Elections: deepfakes; voter suppression; targeting & filter bubbles
The Reality of Current AI Harms:
References:
Blueprint for an AI Bill of Rights (Oct. 2022) (Appendix)
OMB Draft Guidance for Federal Agencies' Use of AI (list of presumed rights- and safety-impacting uses)
1. Example: AI & Public Benefits
2. Other Areas
Criminal Justice System
Policing
Hiring & workplace
Housing
Credit
Elements of Trustworthy AI
1. You should be protected from unsafe or ineffective systems.
2. You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
3. You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
4. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
AI Risks & Trustworthiness
3.1 Valid & Reliable
3.2 Safe
3.3 Secure & Resilient
3.4 Accountable & Transparent
3.5 Explainable & Interpretable
3.6 Privacy-Enhanced
3.7 Fair, with Harmful Bias Managed
The Role for States: Government Use of AI
1. Mandate Risk Management Practices
Determine if rights- or safety-impacting
Require minimum practices:
Complete AI impact assessment
Test performance in real-world context
Independently evaluate the AI
Conduct ongoing monitoring & set a threshold for human review
Ensure adequate human training
Additional minimum practices for rights-impacting uses:
Test for equity & nondiscrimination (pre & post deployment)
Consult impacted groups
Notify impacted individuals at time of encounter
Human consideration & remedy; opt-out
Reference:
OMB Proposed Memorandum for Federal Agency Use of AI (Nov. 1, 2023)
Comply with due process & APA requirements!
The Role for States: Government Use of AI
2. Require Reporting & Documentation
Direct agencies to inventory their uses
Issue templates for reporting (internal & public)
References, cont'd:
VT AI Inventory (Dec. 2022)
CA Executive Order (Sep. 2023)
VA Executive Directive (Sep. 2023)
WA Tech Gen AI Guidelines (Sep. 2023)
3. Designate Appropriate Staff; Equip for Success
Chief AI officers
Provide teams with relevant expertise
Guidance; templates; working groups; other support
4. Take Specific Steps on Procurement
Guidance, templates, staffing support
Ensure sufficient control & ownership over data, data improvements, & procured systems
Ensure quality control, privacy & security!
The Role for States: Countering Harmful AI Uses
1. Combatting Fakes & Providing Authoritative Information
Government officials must act to protect their role as trusted sources of civic information.
[Slide graphic contrasting "Not an information crisis" with "An information crisis"]
consistent branding & trust indicators; e.g. .gov domains
proactive messaging & “pre-bunking” false narratives
establish trusted channels for communication
The Role for States: Countering Harmful AI Uses
2. Take AI-driven harms seriously!
Ensure law enforcement is equipped to address consumer fraud, extortion, NCII, and election interference
Address critical infrastructure & cybersecurity risks
Direct housing / civil rights / consumer protection teams to issue guidance & bring cases
Any AI funds must require responsible innovation
Advance a strategic legislative agenda
Alexandra Givens
agivens@cdt.org
Katy Ruckle, State Chief Privacy Officer
Position created by Washington law
Privacy Principles
Projects that involve personally identifiable information (PII)
Data Protection
What is the CPO's role in relation to AI?
Automated Decision Systems Work
Generative AI
State Chief Privacy Officer
Governance Structure
Representation from WaTech, state agencies, and local government
Steering Committee Objectives
Develop a set of guidelines and policies
Identify and document best practices
Establish a governance structure and develop mechanisms for
accountability and oversight
Document use cases and examine potential societal impact
Facilitate collaboration and knowledge sharing
Promote alignment of new AI technologies to business and IT strategies
[Diagram: AI Steering Committee, Subcommittees, and AI Community of Practice (CoP)]
What is the issue with more data from a privacy perspective?
More data is needed to: build AI, train AI, and maintain AI.
Risks: data persistence, data repurposing, data spillovers, data commingling, data integrity.
Washington State Agency Privacy Principles
Lawful, fair, & responsible use
Data minimization
Purpose Limitation
Transparency & accountability
Due diligence
Individual participation
Security
Interim Guidelines for Purposeful and Responsible Use of Generative Artificial Intelligence
Background
Definition
Principles
Guidelines
Generative AI Usage Scenarios and Do's and Don'ts
Use Cases
Acknowledgments
https://ocio.wa.gov/policy/generative-ai-guidelines
Guiding Principles
Safe, secure, and resilient
Valid and reliable
Fairness, inclusion, and non-discrimination
Privacy and data protection
Transparency and auditability
Accountability and responsibility
Explainable and interpretable
Public purpose and social benefit
Guidelines for Generative AI Use
Fact-checking, Bias Reduction, and Review
All content generated by AI should be reviewed and fact-checked, especially if used in public communication or decision-making.
State personnel generating content with AI systems should verify that the content does not contain inaccurate or outdated information or potentially harmful or offensive material.
Given that AI systems may reflect biases in their training data or processing algorithms, state personnel should also review and edit AI-generated content to reduce potential biases.
When consuming AI-generated content, be mindful of the potential biases and inaccuracies that may be present.
Disclosure and Attribution
AI-generated content used in official state capacity should be clearly labeled as such, and details of its review and editing process (how the material was reviewed, edited, and by whom) should be provided. This allows for transparent authorship and responsible content evaluation.
State personnel should conduct due diligence to ensure no copyrighted material is published without appropriate attribution or the acquisition of necessary rights. This includes content generated by AI systems, which could inadvertently infringe upon existing copyrights.
Sensitive or Confidential Data
Agencies are strongly advised not to integrate, enter, or otherwise incorporate any non-public data (non-Category 1 data) or information into publicly accessible generative AI systems (e.g., ChatGPT).
If non-public data is involved, agencies should not acquire generative AI services, enter into service agreements with generative AI vendors, or use open-source generative AI technology unless they have undergone a Security Design Review and received prior written authorization from the relevant authority, which may include a data sharing contract.
Contact your agency's Privacy and Security Officers for further guidance. (A minimal prompt-screening sketch follows.)
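One way to operationalize this guidance is a lightweight screen that runs before any prompt leaves agency systems. The sketch below is illustrative only and is not part of the WaTech guidelines; the pattern list, category labels, and function names are assumptions standing in for an agency's actual data classification rules and approved services.

```python
# Minimal sketch (not from the guidelines): a hypothetical guardrail that screens
# a prompt for obviously non-public data before it is sent to a publicly
# accessible generative AI service. Patterns and labels are illustrative only.
import re

# Rough patterns for data that should never leave agency systems.
FLAGGED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "credential": re.compile(r"(?i)\b(password|api[_ ]?key|secret)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the categories of potentially non-public data found in the prompt."""
    return [label for label, pattern in FLAGGED_PATTERNS.items() if pattern.search(prompt)]

def send_if_clean(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block and route to a human reviewer instead of the public AI service.
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
    else:
        print("Prompt passed screening; safe to send to the approved service.")

if __name__ == "__main__":
    send_if_clean("Summarize this policy memo in plain language.")
    send_if_clean("Draft a letter to John Doe, SSN 123-45-6789, about his case.")
```

In practice such a filter would supplement, not replace, the Security Design Review and written authorization described above.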
State Ethics Law: Confidential Information
RCW 42.52.050(3): No state officer or state employee may disclose confidential information to any person not entitled or authorized to receive the information.
Definitions (RCW 42.52.010):
(5) "Confidential information" means (a) specific information, rather than generalized knowledge, that is not available to the general public on request or (b) information made confidential by law.
(15) "Person" means any individual, partnership, association, corporation, firm, institution, or other entity, whether or not operated for profit.
Guidelines: Generative AI Usage Scenarios, Do's (best practices) and Don'ts (things to avoid)
Rewrite documents in plain language for better accessibility and understandability.
Do specify the reading level in the prompt, use readability apps to ensure the text is easily understandable and matches the intended reading level, and review the rewritten documents for biases and inaccuracies.
Condense longer documents and summarize text.
Do read the entire document independently and review the summary for biases and inaccuracies.
Don't include sensitive or confidential information in the prompt.
Draft Documents
Do edit and review the document, label the content appropriately, and remember that you and the state of Washington are responsible and accountable for the impact and consequences of the generated content.
Don't include sensitive or confidential information in the prompt or use generative AI to draft communication materials on sensitive topics that require a human touch.
Aid in Coding
Do understand what the code is doing before deploying it in a production environment, understand the use of libraries and dependencies, and develop familiarity with vulnerabilities and other security considerations associated with the code (see the review sketch below).
Don't include sensitive or confidential information (including passwords, keys, proprietary information, etc.) in the prompt or code.
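To make the "understand the code, its libraries, and its secrets" guidance concrete, here is a minimal sketch of a pre-review pass over AI-generated Python. It is not from the guidelines; the sample snippet, secret patterns, and function names are hypothetical, and a real review would still involve reading the code and vetting dependencies by hand.

```python
# Minimal sketch (not from the guidelines): screen AI-generated Python for the
# dependencies it pulls in and for hard-coded secrets before human review.
import ast
import re

SNIPPET = '''
import requests

API_KEY = "sk-123456"  # hard-coded secret: should be flagged
resp = requests.get("https://example.com", timeout=5)
'''

def list_imports(source: str) -> set[str]:
    """Collect every top-level module the generated code imports."""
    tree = ast.parse(source)
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

def find_possible_secrets(source: str) -> list[str]:
    """Very rough pattern match for hard-coded credentials."""
    pattern = re.compile(r"(?i)(api_key|password|secret|token)\s*=\s*['\"]")
    return [line.strip() for line in source.splitlines() if pattern.search(line)]

if __name__ == "__main__":
    print("Imported modules to vet:", sorted(list_imports(SNIPPET)))
    print("Lines that look like hard-coded secrets:", find_possible_secrets(SNIPPET))
```

Listing the imports tells the reviewer which dependencies and their vulnerabilities need vetting; the secret scan catches the most common way confidential values leak into prompts and code.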
Aid in generating image, audio, and video content for more effective communication
Do review generated content for biases and inaccuracies and engage with your communication department before using AI-generated audiovisual content for public consumption.
Don't include sensitive or confidential information in the prompt.
Automate responses to frequently asked questions from residents (example: chatbots)
Do implement robust measures to protect resident data.
Don't use generative AI as a substitute for human interaction or assume it will perfectly understand residents' queries. Provide mechanisms for residents to easily escalate their concerns or seek human assistance if the AI system cannot address their needs effectively (a minimal escalation sketch follows).
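As a concrete illustration of the escalation guidance, the sketch below wraps a hypothetical FAQ backend and hands the conversation to a person whenever the answer is uncertain. The answer_faq() helper, confidence score, and threshold are assumptions, not part of any state system.

```python
# Minimal sketch (not from the guidelines): an FAQ handler that escalates to a
# human queue when the backend is unsure, rather than guessing.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0-1.0, supplied by whatever model/backend is used

CONFIDENCE_FLOOR = 0.75  # below this, hand off to a person

def answer_faq(question: str) -> BotReply:
    """Placeholder for the generative backend; returns canned examples here."""
    canned = {
        "office hours": BotReply("Our offices are open 8am-5pm, Monday-Friday.", 0.95),
    }
    for key, reply in canned.items():
        if key in question.lower():
            return reply
    return BotReply("I'm not sure about that.", 0.20)

def handle_resident_question(question: str) -> str:
    reply = answer_faq(question)
    if reply.confidence < CONFIDENCE_FLOOR:
        # Escalate instead of guessing; avoid logging sensitive details verbatim.
        return ("I'm connecting you with a staff member who can help. "
                "You can also call or email the agency directly.")
    return reply.text + " (Automated answer; reply 'agent' to reach a person.)"

if __name__ == "__main__":
    print(handle_resident_question("What are your office hours?"))
    print(handle_resident_question("Can you change my benefits case?"))
```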
Use Cases
Other data and privacy considerations for Generative AI?
Where did the training data come from?
Was the training data legally obtained?
Is the data being used as a proxy for something else?
Artificial Intelligence Regulation in Washington
SSB 5116 (2021): Establishing guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.
2021 ADS Workgroup Report & Recommendations
#1 Prioritization of Resources
#2 Procurement
#3 Evaluation of Existing Systems
#4 Transparency
#5 Determination on Whether to Use System
#6 Ongoing Monitoring or Auditing
#7 Training in Risk of Automation Bias
Questions?
privacy@watech.wa.gov
AI Resource List
Please see the webinar chat box for a link to the list
Includes:
o Federal-level activities
o State activities: Executive Branch and Legislative
o Local activities
o Technical assistance tools
Contacts
Kate Stoll, AAAS EPI Center: kstoll@aaas.org
Sally Rood, NGA Center for Best Practices: srood@nga.org
Ryan Martin, NGA Center: rmartin@nga.org