Exploring AI: Survey Trends, Lessons Learned, and Governance
The latest TL;dr shares curated insights from three handpicked articles on Artificial Intelligence
Welcome to the latest TL;dr, brought to you by Good Government Files. Today, we separate wheat from chaff in a few articles on the topic du jour, Artificial Intelligence.
We're all busy, and sometimes we just need the takeaways. Let's jump right in.
Local Gov IT Leaders Surveyed on Impact of AI
I love me a good survey, and Route Fifty summed up the results of a Public Technology Institute survey of state and local gov IT executives on all things AI.
Rapid AI Adoption in Local Government: It’s happening, people. Nearly 40% of respondents said their IT department is currently involved in an AI project. Applications include cybersecurity, infrastructure monitoring, public health, traffic flow analysis, workflow automation, customer service, and website management.
Urgent Need for AI Training: 85% of respondents believe they need better training to understand AI; cities like Boston and San Jose are already offering it. Moreover, 51% of respondents believe AI training for local government staff should be mandatory within the next two years, covering issues like addressing potential bias, ethical use, contract modifications, data sharing, personal information protection, and the development of processes to manage AI decisions.
Governance and Guidelines: The absence of a national AI framework has led several state and local governments to establish their own guidelines and policies for AI use. These initiatives emphasize the importance of responsible AI adoption. GGF covered this very topic last month. Additionally, the survey underscores the belief that AI can significantly enhance cybersecurity management in local government operations.
Here’s the complete survey report. Regarding point No. 2, SGR1 is offering free monthly webinars on AI by Micah Gaudet, deputy city manager in Maricopa, Arizona.
Lessons Learned After Working with AI for 11 Months
Writer Alexandra Samuel shares in the Wall Street Journal how working with generative AI “has profoundly changed the way I work, what I work on and, increasingly, how I think.” Three takeaways:
Use AI for Rapid Prototyping: AI tools like ChatGPT are great for quick prototyping and idea validation. This approach helps in determining whether an idea is worth pursuing further, saving time and effort on unproductive projects. Samuel also notes “you get the best results from AI when you treat it as a tutor, rather than expecting it to do your homework for you.”
Challenge Your Limits with AI: Samuel has redefined her professional and personal limitations by being open to trying new things. By seeking AI assistance in various areas, such as software development or data analysis, users can expand their skillsets and overcome self-imposed limitations. Instead of saying: “I can’t,” Samuel encourages adopting a mindset of “I haven’t ... yet.”
Get Great at Prompts: When asking ChatGPT for assistance, Samuel writes “it’s helpful to work from the assumption that you’ll need some back-and-forth before you get a useful answer or output.” She gives the example of crafting the subject line for a newsletter. Don’t just ask for one. Instead, ask for 10 draft subject lines. She can then give it very specific feedback based on that first set: “Options 1, 3 and 6 are too dull; 4 and 5 are good but too much jargon; 7 and 9 are the best because they’re short and have some personality; 8 is good but is an inappropriate double entendre. Give me 10 more like 7 and 9, steer clear of jargon.” She says she gets to a great result much more quickly.
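Samuel’s back-and-forth workflow boils down to keeping the whole conversation in play and layering critiques onto it. Here is a minimal sketch of that loop as an accumulating chat transcript, assuming an OpenAI-style messages list; `ask_model` is a hypothetical stand-in for whatever chat API you actually call.

```python
def refine(ask_model, first_prompt, feedback_rounds):
    """Iteratively refine a model's output by feeding back critiques.

    `ask_model` is a hypothetical callable: it takes the full message
    history (a list of {"role", "content"} dicts) and returns the
    assistant's next reply as a string.
    """
    # Start the transcript with the initial ask (e.g., "give me 10 options").
    messages = [{"role": "user", "content": first_prompt}]
    reply = ask_model(messages)

    # Each round: record the model's answer, append specific feedback,
    # and ask again with the whole history intact.
    for feedback in feedback_rounds:
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": feedback})
        reply = ask_model(messages)
    return reply

# Mirroring Samuel's subject-line example (illustrative prompts only):
# result = refine(
#     ask_model,
#     "Draft 10 subject lines for a newsletter on local-gov AI policy.",
#     ["Options 1, 3 and 6 are too dull; 7 and 9 are best because "
#      "they're short and have personality. Give me 10 more like 7 "
#      "and 9, and steer clear of jargon."],
# )
```

The point of the sketch is the structure, not the wrapper: because every critique rides along with the prior drafts, the model can see exactly which options worked and why.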
A Prescription for Smart Regulations for These Smart Tools
Former CIA officer, congressman and brief GOP presidential candidate Will Hurd, previously on the board of OpenAI, wrote an essay in Politico exploring governance and ethical concerns surrounding artificial general intelligence (AGI).
Governance of AGI: There is a need for robust governance to ensure AGI is developed and used responsibly. Hurd highlights the importance of addressing governance questions such as who can be trusted to develop it, who should be entrusted with the tool once it’s created, and how to ensure AGI is a force for good rather than a potential catastrophe. During his time on the OpenAI board, Hurd wrote “a supermajority of our conversations focused on safety and alignment (making sure AI follows human intention).”
Mandating Legal Accountability: Hurd says legal accountability should be mandated for AI tools, ensuring they abide by existing laws and regulations without special exemptions. He notes the current landscape “consists of a fragmented array of city and state regulations, each targeting specific applications of AI.” Not good. Just because they’re new and supercool doesn’t mean AI companies should get a pass from Congress, which carved social media out of the regulatory rules that radio, TV and newspapers must follow. “We can’t make the same mistakes with AI,” he writes. Amen and amen.
Protecting Intellectual Property (IP) and Implementing Safety Permitting: Hurd advocates for protecting IP by compensating creators when their data is used in AI-generated content, similar to royalties for traditional content creators. Additionally, he proposes implementing safety permitting for powerful AI models, requiring them to meet safety standards before release, akin to permits needed for building nuclear power plants or other critical infrastructure.
He concludes: “But beyond regulations, it requires a shared vision. A vision where technology serves humanity and innovation is balanced with ethical responsibility. We must embrace this opportunity with wisdom, courage and a collective commitment to a future that uplifts all of humanity.”
In Others’ Words
We leave you with a couple of mindful thoughts that have nothing to do with AI, but everything about being a good human being.
You should always be rooting for the people you know. Not only because you may need their support tomorrow, but also because it feels good to celebrate something.
Celebration can rescue your day — even if it is someone else’s victory. Envy will ruin your day — even if you’re actually winning.
— James Clear, the author of Atomic Habits2
Onward and Upward.
Full disclosure: I do occasional consulting work for SGR.
Paid link. As an Amazon associate I earn from qualifying purchases.
Will,
I value your synopses and support the vast majority
of the developments you describe so well.
I have written in previous comments
about an aspect of AI that I do not support,
and ask your indulgence as I hold forth again here.
As a writer, my primary concern about AI
is its misuse by writers,
and its misuse replacing writers.
AI cannot write.
It can only manufacture word sequences.
That is not writing!
But eager AI users make no distinction
between real writing and manufactured text.
Do city governments?
Do they even draw that line, let alone hold it?
Especially where it is VITAL that it be held?
I have no problem with many of the practical uses of AI.
But I object to its misuse by writers
in such ways as illustrated in Lesson 3 above.
It is a "small" but quite revealing misuse.
She abandoned the challenge of being a writer,
which includes creating her OWN subject lines.
Instead, she pulls out AI as her mechanical, robotic crutch.
She lets it save her from the trouble of genuine creation.
Ah!
Now I have manufactured words to choose from!!!
Now I can splice them together however I choose!!!
No one will object. No one will know, nor care.
Alas,
every time she uses her AI crutch
her capacity to write will atrophy, not improve.
And she will have lost her foundation,
her basic grounding:
the vital core perception and awareness
of what it actually MEANS to write.
And what is that?
To write is to bring forward words of truth
that emanate from your own human God-created soul.
AI will never ever be capable of doing that.
AI cannot write,
and must never be described as doing so.
We must recognize this vital fact.
We must (like Dylan Thomas in his poem Do Not Go Gentle)
rage against the dying of the light:
the light of true and genuine writing.