AI in College: A Tool Without Training

Pam Sornson, JD

May 7, 2024

Artificial Intelligence (AI) is permeating every corner of society and has been for a few years. That reality is driving demand for AI technicians and programmers in almost all industries. Unfortunately, while the use of AI is accelerating, the development of a well-trained AI technical workforce is lagging far behind, and companies are clamoring for workers who can help them achieve their AI goals. In many regions, including California, the local community colleges are often seen as ground zero for the generation of new educational paths to AI comprehension and mastery.

 

‘Use’ Does Not Equal ‘Understanding’

For many AI users, early engagement with the advanced technology didn't even register as an event. As organizations embraced the cost savings of chatbots for customer service and robots for manufacturing, embedding both into their services, the consuming public was left out of the adoption loop. Customers and colleagues simply continued to engage with the entity, unaware that they were now working with a digital program and equally unaware of the technological advances being made behind the scenes.

That reality reveals the challenge that many enterprises are dealing with today: using the technology is – at least for the moment – infinitely easier than introducing it to existing systems and workflows. AI's architecture differs from that of traditional computer programs and apps and, in many cases, renders those legacy activities obsolete. Consequently, many company owners, agencies, and organizations are seeking resources to help them design new AI systems, integrate them into existing infrastructures, and then maintain those capacities over time. In short, they are looking for an AI-savvy workforce that does not yet exist.

 

New Programs Require New Practices

America's higher education system also faces a shortage of AI instruction courses. While many schools currently use the technology in at least some of their departments and divisions, there are very few training opportunities available to ensure new users can master and optimize the latest digital tools. One researcher quantified the extent to which both K-12 and higher education schools now use AI and provide training for it:

Only one in four survey respondents acknowledge that their school has put intentional limits on the use of AI (specifically 'generative' AI) by its educators and staff. Eighteen percent (18%) report that their school is working to identify appropriate use cases for AI applications, while 5% say their workforce can use the software for limited purposes. Two percent (2%) report that their institution bans the use of the technology by teachers and staff altogether.

Of those schools that have adopted the tech, more than one in three respondents (37%) reported that they didn't know how it was being used on campus. Fewer than a third could articulate how it was being implemented at their schools:

Teaching respondents were using AI to plan lessons (34%), create assignments (30%), create course content (29%), and design coursework (18%).

One in five administrators (20%) use AI to automate or streamline routine activities.

Perhaps most notably, almost one in three respondents (32%) reported that their school wasn’t offering any AI training or, if it was, they didn’t know about it. Further, while 46% of responding schools were offering AI training to their students, only 37% were providing training for their educators, and 27% were providing training for their staff.

The survey results indicate that learners, teachers, and administrators are using AI more frequently in America’s school systems, even though they’ve not been trained on how to implement or manage it. Without that knowledge or skill set, users – and their schools – risk making errors they do not understand that can cause consequences they can’t anticipate.

 

New Practices Require New Skills

At the same time, few educational resources across the country provide credentialed AI design and development training. Rather than develop programs that teach AI theory, design, and practice, most schools simply adopt the AI programming already available through today's immense digital platforms. And while most standardized AI systems function well for their purpose, they are not designed to address the needs and demands of specific user groups or populations. What is needed instead are introductory and foundational AI courses that build the AI-based knowledge, comprehension, and skill sets now in such high demand.

Two large digital platforms have been actively engaged with schools to develop specialized, AI-focused courses and programs designed to meet both academic and industry standards.

The first 'official' AI degree, an associate degree in machine learning and AI, was launched in 2020 through a partnership between Arizona's Maricopa Community College and the Intel Corporation. Intel adapted its European 'AI for Youth' syllabus to the specific needs of those community colleges, assisted in training staff and faculty, and was instrumental in its adoption by the schools.

Amazon’s AWS is also actively engaging with academia to both advance AI resources in higher education and overcome inequities found in those systems. Its Machine Learning University has been launched in several minority-serving institutions (MSIs), community colleges, and historically Black colleges and universities (HBCUs) to teach database, machine learning (ML), and AI concepts. The first step in its training program is to educate the educators on the science behind and practices involved in artificial intelligence programming.

At present, California's UC and CSU systems offer only a few master's degrees related to AI development; the state has no associate or bachelor's degree paths.

 

AI has erupted into the general public's consciousness without the slow and measured 'introduction-education-implementation' process that typically precedes a whole-community transition from one system to the next. The speed of adoption has eroded the opportunity to ensure appropriate and ethical use of the new digital assets. Further, the unique skills and digital insights needed to design an AI infrastructure are complex and must be mastered if a newly generated system is to replace legacy technology while maintaining or enhancing productivity.

AI in College: The Advent of AI Education

Yes, Artificial Intelligence (AI) is everywhere these days, sometimes visible, most times not. Learning that it is ‘out there’ and in use is one thing; most people can now discern when they’re interacting with a robot, and they probably recognize that many of the appliances they use are built by robots in factories.

However, to fully embrace the burgeoning digital opportunity, those same people must learn a whole new bag of computing tricks, as must the systems with which they work. The challenge they face is a woeful lack of training in the basics of AI (how it works, how to set it up, how to manage it), let alone in how to use it in commercial settings to enhance profitability and efficiency. Detecting, containing, and recovering from fraudulent AI applications is yet another educational avenue still to be built. Developing a workforce capable of meeting these emerging demands is the next frontier in public education at all levels.

Fortunately, America's community colleges are taking on the task of teaching the masses about AI's benefits and threats. In California, these higher education schools are gearing up their resources to become the AI training centers of the future, which is, in this case, tomorrow.

 

AI Training – Sooner is Better than Later

Perhaps the biggest challenge in developing a comprehensive AI curriculum is that there are few readily accessible guidelines or parameters for doing so. Adoption of the technology has been swift, growing much faster than the attendant education resources needed to train users on its safe deployment and management. That education gap is one reason AI poses as much threat as benefit:

AI is already enhancing criminal activities (deepfakes, phishing, and malware, to name just three), and its potential for illicit use is as great as it is for positive advancements.

In February 2024, the US Federal Communications Commission (FCC) made robocalls that use AI-generated voices illegal because of their capacity to defraud unsuspecting consumers.

Its unfettered use in public systems presents an immense problem. Already embedded in financial services, public safety, and infrastructure systems, AI programming poses threats to national and global power grids, logistics networks, and military strategies if utilized for nefarious purposes.

So far, only the European Union has introduced AI standards (the EU AI Act) that are directed (like its General Data Protection Regulation – the GDPR) at ensuring the technology remains safe and trustworthy. America's government leaders have yet to publish a formal or unified set of standards similar to those of the EU (although many government agencies are working on their own AI concerns), and no entity has yet established an educational approach to AI usage that comprehensively encompasses all of its aspects and opportunities. There are, however, schools around the country that have been working on this issue for some time and have insights to offer anyone investing in building a college-level program for AI programming.

 

Embracing AI Foundations

Community colleges in Arizona, Texas, and Florida recently unveiled their AI courses for both associate and bachelor's degrees at the American Association of Community Colleges (AACC) annual convention in April. Maricopa Community College was the first school to launch an associate degree in AI and machine learning (ML), in 2020, with the help of Intel Corporation. Intel adapted its European-based 'AI for Youth' program to suit the needs of an American community college, and that foundation gave rise to the new educational opportunity.

The AACC conference also provided a platform for schools and AI experts to collaborate on what AI training could look like and, perhaps more importantly, what not to do when developing such a program.

DO reach out to AI industry leaders, such as Intel, Microsoft (which is already working with the Los Angeles Community College District (LACCD)), and Amazon (which developed its Machine Learning University—MLU—specifically for community colleges).

DO look to local companies and businesses as collaborators for training specifics. They, too, are facing the AI juggernaut and need a workforce resource to help them master the new capacities.

DON'T let budget dominate the discussion. Yes, the needed updates and upgrades will come with added costs; however, the long game suggests that the investments will result in improved processes across all college sectors and, especially, in the local economies.

DON’T leave faculty training out of the mix. Too many schools focus on training students but miss the opportunity to upskill their in-house teachers and faculty members.

One of the most important points raised during the AACC discussion: don't overlook adult learners. Thousands of well-employed workers and professionals need AI upskilling, too, not just the typical community college youth population. Serving these groups will require modifying course times, adding online options, and building flexibility into program parameters.

 

California’s Higher Education AI Initiatives

California’s community colleges are also focused on AI developments within their spheres of influence. The California Community College Chancellor’s Office (CCCCO) is working with leaders across the state to understand the scope of the programming and how it already impacts each school. In addition to ensuring the software provides educational benefits (individualized learning options and enhanced research and creativity opportunities), schools are also concerned about reducing the potential for academic dishonesty, which erodes the quality of any education.

Introducing new AI-focused programs into the California Community College system requires a formal adoption of the ‘discipline’ by the organization’s Academic Senate (ASCCC). In late September 2023, that entity received a formal “Revisions to Disciplines List Form” submission to add “Artificial Intelligence” as a new discipline in higher education. The submission lays out the need for such a revision:

The submission form notes that there are only six master's degrees in AI and ML available in the UC (Santa Cruz, Los Angeles, and Riverside) and CSU (San Francisco, San Jose, and Los Angeles) systems, but no bachelor's or associate degrees are offered anywhere in the state.

There is currently an 'undersupply' of AI workers across the state, with a projected gap of over 20,000 AI-focused workers annually in the Bay Area alone.

Ten letters of support accompanied the submission, issued by one California State University (San Jose), five community colleges (Santa Ana, Mission, Evergreen, San Mateo, and Folsom), three industry partners (Intel Corp., Amazon Web Services, and Sustainable Living Lab), as well as personal endorsements from the North and Far North Regional Consortiums.

With demand for AI-trained workers rising in all sectors, a well-thought-out educational program to build that workforce is needed.

Despite its prevalence in so many of life's arenas already, AI remains an unmanaged, misunderstood resource that can do significant harm as well as great good. It is critical that communities invest in building the educational resources needed to control it and optimize its use for appropriate purposes. It's gratifying to see how America's schools and businesses are partnering to provide their communities with that resource.

AI in Education: From Peril to Promise

Pam Sornson, JD

April 16, 2024

One unexpected benefit arising out of the COVID-19 pandemic is the global embrace of technology to further goals and improve systems. During its acute phase – after the identification of the coronavirus and before comprehensive vaccination strategies reduced its lethality – many businesses and even whole industries adapted their internal processes and external outreach to accommodate responsive digital resources. The world's education systems were leaders in these endeavors, and many schools were able to maintain (though not fully achieve) their expected educational trajectories despite the virus's many disruptions.

To a certain extent, the uncomfortable experience of transitioning from classroom to computer teaching and learning introduced many teachers, students, and institutions to the values and benefits of digital educational tools. The new ‘digitally enhanced’ educational processes offered ways to improve courses and programs while also giving learners more tools with which to – well – learn.

As Artificial Intelligence (AI) technology gains ever more educational territory as a valued asset for both sides of the teaching process, each side can benefit from its expansive capacities, just as they did when they transitioned from a campus-based educational platform to a virtual or hybrid one.

 

From Chaos to Creativity

While it seems that AI has just recently burst onto the global stage, it’s actually been working behind the scenes for many years. However, the more recent advent of Generative AI as a consumer-directed tool has stirred a global frenzy, as programs like ChatGPT offer their users unique and never-before-seen production and creativity options.

The higher education community has been particularly interested in AI, looking at how it can enhance its efforts and those of its students and stakeholders. It is also very cognizant of the potential threat the programming presents to its systems and tried-and-true practices. Accordingly, schools across the country have been evaluating the presence of AI on their campuses and making decisions based on how it's currently deployed and how they can better master its resources going forward.

 

Early Adoptions

According to a 2023 survey by EDUCAUSE, a non-profit organization dedicated to evaluating the use of data and technology to ‘further the promise of higher education,’ the higher ed community is now busy exploring how the technology is used, can be used, and should be used. Survey responses indicate that while most schools (76%) are tuning their current AI efforts to improve the student experience, many are also exploring how to implement AI capacities into other educational processes.

As a Strategic Planning Tool:

As a strategic planning tool, AI is now almost ubiquitous on higher ed campuses, with a full 89% of respondents reporting that their institution is actively using it to plan its next steps. Schools are concerned about maintaining an 'AI status quo' with their students, who have already adopted the technology. They're also unnerved by the potential for inappropriate use that practice might entail. They do, however, embrace it as a strong supplement to their student services capacities. Many have already begun operationalizing their AI goals by implementing training classes for faculty (56%), staff (49%), and students (39%).

As a Leadership Tool:

In general, most higher ed leaders’ opinions about AI mirror those of the general public; they are ‘cautiously optimistic’ about its successful implementation as a school-wide asset. More than half (56%) are now actively engaging with AI within their roles, and most believe that almost all functional elements of their administrations have or will have responsibilities to adopt and manage AI strategies within their sectors.

Survey responses also indicate more work needs to be done. Many leaders report being unaware of what others are doing regarding AI adoption, both on their campus and around their region. Institutional silos may be the cause of the gaps.

As an Impact on the Institution:

Almost across the board, AI processing is changing how schools go about their business:

Teaching:

Almost all respondents (95%) reported that AI is already having a notable impact on teaching and learning, indicating that both instructors and students are seeing evidence of the technology in many of their classroom activities.

More than three in four (78%) respondents acknowledged that the technology 'has impacted academic integrity.' Teachers have begun questioning whether some of the work submitted by their students is authentic and original because the text does not conform to the learner's typical writing style and proficiency.

 

Continuing Concerns

Schools are also recognizing and experiencing the challenges AI presents, especially regarding information security.

Four in five (79%) respondents are concerned about the impact of AI on cybersecurity systems, while 72% see its threat to data privacy.

Three in four (74%) are concerned that inappropriate AI usage may violate federal regulatory compliance, while more than half (56%) share similar concerns regarding local compliance requirements and ethical data governance practices.

More than half (52%) report concerns about AI and its capacity to amplify the embedded biases that exist in current databases.

Many also reported misuse of the technology, noting submitted papers that clearly lacked human oversight in their creation and did not disclose their AI origins.

 

Overall, the survey reveals that AI is swiftly entering the higher education sector as both a supportive tool (for teaching, learning, and strategy) and a threat (impacting data privacy, integrity, and security). Responses present a mixed picture of its impact on higher education institutions. Gains are being made in school productivity and student connections, while concerns remain regarding the technology’s capacity to erode academic integrity and information, institutional, and personal security systems. And notably, despite its growing presence on campus, only a few respondents (11%) had seen new AI-related jobs emerge or existing jobs be modified (14%) to reflect the technological advances.

Just as the pandemic rolled unimpeded over the world, AI is spreading – mostly uncontrolled – into all corners of society. Like all industries, the higher education sector must also expend the resources needed to harness its promise and quell its threat if it wants the technology to enhance its future and that of its stakeholders.

AI in Education: Addressing Obstacles to Open Opportunities

Pam Sornson, JD

April 16, 2024

Perhaps the most prominent concern raised by the use of Artificial Intelligence (AI) and Generative Artificial Intelligence (GAI) is the potential threat of ‘technological unemployment.’ The issue stems from the very real possibility that emerging digital resources have the capacity to eliminate jobs currently held by humans. The work would be transferred to robots, automated machines that are capable of performing the activities faster, more efficiently, and more profitably. Both employers and industries are interested in exploring any labor-related option that promises to increase productivity without also unnecessarily increasing expenses. That reality is driving the current uptick in AI technology investments, especially in the healthcare, finance, energy, and agriculture industries. Workers in almost all sectors are rightfully concerned that their occupation might be one that becomes obsolete due to the adoption of technological tools.

However, many economic experts do not believe that GAI will cause widespread job layoffs and high, sustained unemployment. Instead, they posit that the newly embraced digital agency will trigger the development and growth of as-yet unidentified occupations and industries, which has been the trajectory of the global workforce through each of the past three ‘industrial revolutions.’ Their premise is that the future looks different – not worse – because of GAI and that the community has the tools in place to ensure that the changes triggered by GAI bring positive economic development, not certain financial ruin for millions of workers.

 

AI as a Workforce Disruptor

The phrase 'technological unemployment' certainly triggers fears in today's technologically fraught digital society, even though it isn't a recent coinage. Economist John Maynard Keynes first used it in 1930 while lamenting that 'economizing the use of labor' (through the adoption of steam and mechanically produced power sources) had reduced the need for human effort without, at the same time, providing an alternative occupation for the displaced workers. His concern remains valid, as workers in all industries – and at all levels of industry – assess whether their workplace activities are susceptible to digital automation. At this stage of the '4th Industrial Revolution,' it seems no job or position is off limits to a potential AI intervention.

One industry professional is actively researching which industries or sectors are most susceptible to experiencing technological unemployment based on the types of jobs they typically offer. Cameron Sublett, the director of the Educational Leadership & Policy Studies Department at the University of Tennessee, Knoxville, has been researching the potential impact of AI on education, particularly Career & Technical Education (CTE).

His analysis indicates that occupations that involve research, thinking, and analysis are not good candidates for an AI-driven replacement. These jobs require higher levels of education (bachelor’s degrees and beyond) and the mastery of transferable skills like critical thinking, creativity, etc. The skills are transferable because they can be deployed in many occupations and settings.

Alternatively, the sectors most vulnerable to an AI takeover are those that demand technical skills, which require specialized knowledge of specific machines or processes. In many cases, technical skills are easily replaceable by AI and machines because they require little or no 'thinking' to complete.

Sublett then applied his assessment to the 16 Career Clusters outlined in the National Career Clusters® Framework to determine which careers (and therefore which training programs) will be most impacted by the adoption of the emerging ‘AI workforce.’ Reviewing that data through an educational achievement lens (high school diploma, associate degree, and bachelor’s degree), he identified which career clusters are more likely to see AI interventions sooner rather than later:

Those clusters with high numbers of high school graduates as workers are the most vulnerable to technological unemployment. Their jobs are typically rote and mechanical, making them easier to automate. These industries include architecture and construction, hospitality and tourism, manufacturing, and transportation.

Those clusters most reliant on transferable skills are least likely to see an influx of machines because they rely on human creativity and ingenuity to thrive. These industries include business management, health services, human services, and legal services.

Across all clusters, higher levels of education apparently act as barriers (or at least hurdles) to the automation threat, at least for now.

Based on his data, Sublett suggests that today’s CTE courses and programs can do a better job of integrating transferable skills into technical training. He posits that well-trained CTE graduates should be better prepared for automation than their BA or MA counterparts because their certification programs can more easily integrate the two skill sets into current training programs. Recently expanded funding for CTE programs can help move that integration process forward.

 

AI as an Education Enhancer

Sublett isn’t alone in his assessment that current CTE training programs can be modified to embrace the power and opportunities of AI and GAI. The World Economic Forum (WEF) is also advocating for changes in traditional technical educational programs to embrace AI and all its opportunities.

The WEF sees promise in AI's capacity to not just help learners learn but also to help teachers teach. The organization notes that finding a solution is not the only aspect of a complete project; clearly and accurately defining the problem is also a critical element of project success. It supports a new learning method, the PAIR Framework (problem, AI, interaction, reflection), which guides users through the AI assessment process by helping them:

formulate the problem,

explore and find the most relevant AI tools,

interact with both AI tools and relevant facts, and

reflect on the process, their conclusions, and their experiences with the technology.

The framework provides a strategy for familiarizing users with AI's capabilities and then assessing its effectiveness before determining whether the digital assistance was beneficial to their project. It also helps new AI users develop and apply transferable skills, a capability that carries into any training program or occupation.
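One way to make the four PAIR phases concrete is as a simple guided worksheet. The sketch below illustrates the framework's flow and is not an official implementation; the prompt wording is invented:

```python
# An illustrative walk-through of the PAIR phases (not an official
# implementation of the framework): each phase poses a guiding prompt
# and records the learner's written answer.
PAIR_PHASES = [
    ("Problem", "State the problem precisely. What would success look like?"),
    ("AI", "Explore the tool landscape. Which AI tools fit, and why?"),
    ("Interaction", "Work with the tools; cross-check outputs against known facts."),
    ("Reflection", "Did the AI help? What would you do differently next time?"),
]

def run_pair_worksheet(project: str) -> dict[str, str]:
    """Guide a learner through the four phases, collecting their notes."""
    notes = {}
    print(f"PAIR worksheet for: {project}")
    for phase, prompt in PAIR_PHASES:
        notes[phase] = input(f"[{phase}] {prompt}\n> ")
    return notes

# Example: run_pair_worksheet("Literature review on battery recycling")
```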

For teachers, AI also expands the capacity to reach every student with lessons and plans personalized for the individual. Using an AI resource, the teacher can modify course materials to match the needs of the specific student and then use it again to assess how well the learner is mastering the work. As a teaching tool, AI has the capacity to effectively respond to teacher shortages and educator burnout by vastly expanding the reach and scope of every class and program to meet the needs of all learners.

 

Despite its potential to upend much of the world's workforce, emerging research suggests that AI might instead provide more people with better skills and more opportunities to improve their economic situation. Schools that add AI training to their existing programs will expand their students' capacities beyond the risk of technological unemployment.

AI in Industry: An Evolution in Progress

A recent Deloitte survey of over 2,800 corporate directors and C-Suite leaders reveals that Artificial Intelligence (AI) is making significant inroads into the workings of thousands of national and global enterprises. Representatives from six industries and 16 countries offered their insights and experiences about how their organizations are using the technology. Their observations illustrate AI's broad reach and deep impact on their companies, communities, and industries.

Adoption Across the Industrial Board …

Just one reason why AI—and more specifically ‘generative‘ AI—has been embraced as swiftly as it has is its capacity to solve enterprise-focused challenges that had previously been difficult or impossible to manage. Access to almost infinite levels of data gathered across multiple databases facilitates a more comprehensive analysis of an issue than has ever before been available. Business leaders are seeing that potential and turning to the tech to gain insights and guidance they previously could not find. There are opportunities for AI-empowered actions in almost every industry, as is demonstrated by a recent report on the burgeoning number of AI use cases:

Retail

Retail organizations are using AI to provide several services that are critical to their success.

‘Bots’ now respond to a high percentage of customer service calls, directing callers to the department best suited for their inquiry.

The tech is also enhancing the shopper’s experience by providing more detailed and appropriate suggestions based on their input and history.

Logistics and Travel

Businesses that provide transport or cater to travelers are also embracing AI as a service-enhancing tool:

AI engines can find more and better transport options, whether for an individual heading across the state or a shipment of goods moving across oceans.

AI programs and embedded sensors combine to provide minute-to-minute oversight of supply chain products, from the moment they enter the production cycle through to their delivery to their ultimate user. Tracing the passage of goods through service centers and across international borders has never been easier.

The software also alerts systems developers to potential bottlenecks that threaten delivery times, early enough to prevent the problem from occurring.

Financial Services

‘Money’ isn’t always cash these days, and AI is assisting billions of people to access the financial resources they seek, regardless of the currency they’re using or the purchases they’re making.

Banks are streamlining their offerings to accommodate their increasingly tech-savvy financial customers, often providing personalized planning tools designed for the unique person or entity in question.

Insurance companies are using AI to evaluate claim data, establish legitimate claims, and uncover potential or actual fraud cases. They’re also streamlining their policies to better reflect actual risk levels based on AI-enabled risk assessment capabilities.

Energy

Both traditional and emerging energy providers are fine-tuning their activities based on insights gleaned from AI programming.

Traditional energy providers use the technology to improve efficiencies within their plants and systems, often by automating services and using sensors to track metrics, performance, maintenance, and other relevant elements.

They’re also using AI to facilitate and track the growing inputs of renewable resources into the nation’s power grid.

Healthcare

The healthcare services spectrum is perhaps the most invested in AI.

AI is proving invaluable as a tool to streamline administrative systems to make them more efficient and effective.

It is also connecting medical teams with emerging data that is relevant to their shared patient. With each specialist and team having virtually instant access to developing healthcare needs, the patient can receive the best possible care for their particular condition regardless of whose office they happen to be in.

The pharmaceutical industry is also using AI to improve its services to the healthcare industry. Automated software eases data collection and analysis for clinical trial enrollment and operations, helping ensure proper execution according to industry standards. The resulting information informs drug developers of needs, threats, and other relevant factors impacting the future of a potential medicine or therapy.

Even in its current, relatively raw, and unregulated state, the use of AI is gaining significant ground in almost all industries to perform an ever-growing list of services and occupations.

… For an Almost Unlimited Number of Purposes

While its popularity for automation implementation and control continues to grow, many companies report using their AI resources for one or more of three specific functions. The Deloitte report notes that most early AI adopters focused their investments on improving corporate efficiencies, increasing productivity, and reducing costs. Of those survey respondents:

56% reported that their AI investments were making their organization more efficient, while

35% reported their costs had shrunk because of the technology.

Almost one in three (29%) reported they experienced enhanced product values and services as a result of their AI implementation.

Other surveys show that companies are using the technology to perform a myriad of services beyond achieving better efficiency or reducing costs:

AI programming can oversee the inner workings of almost any digital system, so its capacity to optimize website reliability and uptime, for example, is unmatched. The AI overlay can detect potential site or data disruptions before they cause problems, and its monitoring capacity ensures that all elements of the organization remain in sync with corporate goals and initiatives.
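The 'overlay' described here amounts to continuous anomaly detection on operational metrics. A minimal sketch of that idea, assuming a stream of response-time samples; the window size, threshold, and data below are illustrative, not values from the survey:

```python
# A toy uptime monitor: keep a rolling window of response times and flag
# samples that sit far outside recent history, before users notice.
from collections import deque
from statistics import mean, stdev

def make_latency_monitor(window: int = 60, z_threshold: float = 3.0):
    history = deque(maxlen=window)

    def check(latency_ms: float) -> bool:
        if len(history) < window:
            history.append(latency_ms)   # still learning what 'normal' is
            return False
        mu, sigma = mean(history), stdev(history)
        history.append(latency_ms)
        # A large z-score suggests a brewing site or data disruption.
        return sigma > 0 and abs(latency_ms - mu) / sigma > z_threshold

    return check

check = make_latency_monitor()
for sample in [119.0, 120.0, 121.0] * 20 + [122.0, 900.0]:
    if check(sample):
        print(f"Possible disruption: {sample} ms")   # flags only 900.0
```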

AI's predictive maintenance capability is saving companies money, too. GE and Rolls-Royce, for example, are using it to analyze aircraft engine performance, both to catch wear-and-tear issues before they become failures and to track exhaust metrics and other environmental concerns typical of the aeronautics field.
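Neither company's actual models are described here, but the underlying idea – extrapolating a wear trend from sensor readings to schedule maintenance before failure – can be sketched simply. The readings, limit, and alert window below are all hypothetical:

```python
# A toy wear-trend extrapolation (illustrative, not GE's or Rolls-Royce's
# actual method): fit a line to recent exhaust-gas temperature readings
# and estimate how many flight cycles remain before a limit is crossed.
import numpy as np

def cycles_until_limit(egt_readings, limit_c=950.0):
    cycles = np.arange(len(egt_readings))
    slope, _ = np.polyfit(cycles, egt_readings, 1)   # least-squares trend
    if slope <= 0:
        return None                                  # no upward drift
    return (limit_c - egt_readings[-1]) / slope

readings = [901.0, 902.5, 903.0, 905.0, 906.5, 908.0]   # invented data
remaining = cycles_until_limit(readings)
if remaining is not None and remaining < 200:            # alert window
    print(f"Schedule inspection: ~{remaining:.0f} cycles until limit")
```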

Workforce optimization is another industry function being transformed by the technology. To both enhance productivity and reduce costs, many companies are using AI to manage workforce scheduling by incorporating into the analysis factors such as employee availability, worker skill sets per project, and customer traffic. Organizations large and small, including Target®, Costco®, and Starbucks®, use AI to ensure they are optimizing their workforce metrics as well as keeping their customers happy.
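As a rough illustration of that scheduling logic (not any named retailer's system), a greedy assigner can staff the busiest shifts first, matching skills against availability while spreading the load; all names and numbers below are invented:

```python
# A greedy sketch of AI-assisted shift staffing: fill the busiest shifts
# first with qualified, available workers, preferring the lightest load.
def schedule(shifts, workers, max_shifts=2):
    load = {w["name"]: 0 for w in workers}
    roster = {}
    for shift in sorted(shifts, key=lambda s: -s["demand"]):   # busiest first
        qualified = [w["name"] for w in workers
                     if shift["id"] in w["available"]
                     and shift["skill"] in w["skills"]
                     and load[w["name"]] < max_shifts]
        qualified.sort(key=lambda name: load[name])            # balance load
        picked = qualified[:shift["staff_needed"]]
        for name in picked:
            load[name] += 1
        roster[shift["id"]] = picked
    return roster

shifts = [
    {"id": "sat_am", "skill": "barista", "demand": 90, "staff_needed": 2},
    {"id": "mon_am", "skill": "register", "demand": 40, "staff_needed": 1},
]
workers = [
    {"name": "Ana", "skills": {"barista", "register"}, "available": {"sat_am", "mon_am"}},
    {"name": "Raj", "skills": {"barista"}, "available": {"sat_am"}},
    {"name": "Mei", "skills": {"register"}, "available": {"mon_am"}},
]
print(schedule(shifts, workers))   # {'sat_am': ['Ana', 'Raj'], 'mon_am': ['Mei']}
```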

 

While still a relatively new resource, both the current capacity and the future promise of AI as a business tool shouldn't be underestimated. It offers unmatched opportunities for growth and, when used properly and safely, may solve some of the world's most intractable problems.

 

AI Adoption in America: What, Where, and Why

Pam Sornson, JD

April 2, 2024

A recent Burning Glass Institute (BGI) report analyzed which U.S. regions are most prepared to embrace Artificial Intelligence (AI) as an economic development tool, which many consider an indicator of future economic growth. Yes, almost every region of the country would benefit from a more sophisticated technological base, but most have not yet invested in the foundational infrastructure to support that digital evolution.

The BGI analysts compared the occurrence of legacy tech skills versus AI-based skills – what BGI calls 'Frontier Skills' – in communities across the country to determine which geographical areas would see the most AI-driven expansion, both in their workforces and their economies. Their findings were sometimes surprising:

Not all digital skill sets are the same;

Industries evolve differently depending on their location and local resources, and

Not all industries lend themselves to an early or comprehensive adoption of the still unpolished computing opportunity.

While AI resources are advancing across all regions, only a few are truly prepared to maximize the opportunities it presents right now.

 

What Needs Doing: Legacy Expertise versus AI Frontier Skills

Fundamentally, AI and legacy skill sets overlap at the most basic level. Every digital tool – AI or otherwise – needs to be:

programmed for launch and then reprogrammed over time as needs evolve;

continuously managed to ensure full functionality, and

secured to ensure no inappropriate intrusions or actions can threaten its performance.

Companies using technology in any capacity typically have an IT department to manage these functions and maintain their productivity and safety. Further, as digital technology permeates more elements of the industrial complex, there will always be demand for these types of skills.

AI programming, however, requires a set of skills over and above those fundamental actions. In addition to running reliably, AI software adds services not found in non-AI tech:

Machine Learning incorporates neural networks, a foundational computing structure modeled on the workings of the human brain, to facilitate 'deep learning' programming that can 'read' disparate data types like images, audio, and text to discern insights and make predictions. The neural network exchanges data across its nodes to 'learn' from other information caches, find and fix mistakes, and improve its functionality without additional human intervention. Over one-third of patent submissions in the past ten years contain a 'machine learning' (ML) capacity, indicating its popularity as a digital business tool.
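To make 'learning without additional human intervention' concrete, here is a minimal neural-network example using scikit-learn: the model improves its predictions from labeled examples alone, with no task-specific rules programmed in. It illustrates the concept only, not any patented system mentioned above:

```python
# A small neural network taught to read 8x8 images of handwritten digits.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)          # weights adjust to reduce prediction error
print(f"Held-out accuracy: {net.score(X_test, y_test):.2f}")   # typically ~0.95+
```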

Computer vision is another AI-related technology that requires upskilled tech training. It gives software the capacity to 'see' the data it's aimed at and to derive and act on the information gleaned from those sources. The technology collects images of environmental elements using cameras, sensors, and algorithms, then identifies factors that indicate locations, threats, and other relevant elements to inform the AI system's 'decisions.' It is a critical element of a 'self-driving' vehicle.
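That capture-analyze-decide loop can be sketched with OpenCV. The bundled Haar-cascade face detector below is a classical stand-in for the far richer deep-learning models a self-driving stack would actually use; the camera index is an assumption:

```python
# The capture -> analyze -> decide loop in miniature (illustrative only).
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)        # first attached camera (assumed present)
ok, frame = camera.read()           # capture one image of the environment
camera.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detection would feed the system's 'decision' about its surroundings.
    print(f"Detected {len(faces)} face(s) in view")
```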

Natural Language Processing (NLP) is also an AI component. It can be 'rule-based' (driven by programming specific to the entity) or ML-based (driven by both rules and the results of countless inquiries and searches). NLP seeks to understand the meaning of text and voice inputs so it can respond to both written and oral data. Its use in 'chatbots' (programs that respond to written inquiries) and 'digital assistants' like Alexa and Siri has revolutionized how many people use their digital devices.
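The two NLP styles can be contrasted in a few lines: a rule-based responder fires only on hand-picked keywords, while an ML-based classifier learns intents from labeled examples and tolerates varied wording. The phrases and intents below are invented for illustration:

```python
# Hand-written rules versus a statistical intent classifier (toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

RULES = {"hours": "We're open 9 to 5.", "refund": "Refunds take 5-7 days."}

def rule_based_reply(message: str) -> str:
    for keyword, reply in RULES.items():     # fires only on exact keywords
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand."

examples = ["when do you open", "what time do you close",
            "i want my money back", "return this item for a refund"]
intents = ["hours", "hours", "refund", "refund"]
ml_model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(examples, intents)

print(rule_based_reply("What are your hours?"))           # keyword match
print(ml_model.predict(["can I get my money back"])[0])   # learned: 'refund'
```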

The demand for AI-specific programmers is large and growing, with one in three companies saying they can’t move forward with AI adoption because they lack the technically skilled workforce needed to do so. However, those current programmers with computer science degrees and a mastery of logic, reasoning, and problem-solving can attain AI skills by pursuing degrees within that specific field, assuming they can find a school that offers the training. They’d be well advised to take that path: companies that have already implemented their AI strategy are reporting their intentions to increase their investments – and consequently, their Frontier Skilled workforce – in 2024 and beyond.

 

Where AI Skills Are Most Concentrated

The BGI report analyzes Frontier Skills capacities in metro areas based on the size of the community, with large ‘metropolitan statistical areas’ (MSAs) comprising cities with 25,000+ tech workers, medium-sized MSAs with tech workforces numbering between 5,000 and 25,000, and small MSAs that are home to 5,000 or fewer tech workers.
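In code, the report's size buckets reduce to a simple classification. The boundary handling below follows the text's '25,000+' and '5,000 or fewer,' and the worker counts are invented:

```python
# The BGI size buckets as a classification rule (illustrative data).
def msa_category(tech_workers: int) -> str:
    if tech_workers >= 25_000:
        return "large"
    if tech_workers > 5_000:
        return "medium"
    return "small"

for city, workers in [("Seattle", 90_000), ("Provo-Orem", 12_000), ("Smallville", 3_000)]:
    print(city, msa_category(workers))
```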

Not surprisingly, those regions and urban metropolises that have already invested in tech- and data-enabled economies are leading the country in their AI adoption processes, although in many cases, the capacity for local industries to embrace AI also impacts its adoption rate.

Three large MSAs – Seattle (first), San Jose (second), and San Francisco (third) – lead the country in Frontier Skills concentrations due to their underlying foundations of ‘technology’ as an industry in and of itself. Los Angeles-Long Beach-Anaheim ranks 8th on this list.

Given their histories as ‘tech-heavy’ economies, San Diego, Austin, Boston, and New York are also high on the list of large MSA early adopters.

Notably, while the Washington D.C. MSA is home to one of the largest tech-based workforces in the country, its industries are mainly defense and government contracting, which are typically based on legacy technology. Of the 27 large-sized MSAs identified by the BGI, Washington D.C. ranked 21st, behind less obvious contenders Detroit (18th), Kansas City (19th), and Philadelphia (17th).

Utah’s burgeoning ‘Silicon Slopes’—Provo-Orem (1st), Ogden-Clearfield (10th), and Salt Lake City (3rd)—have propelled the region to the top of the mid-sized MSA category.

Surprisingly, Fayetteville-Springdale-Rogers, Arkansas, is number 2 on the mid-sized MSA list, due mainly to the presence of tech-forward Walmart. Walmart has been investing in advanced technologies for years, so its early adoption of AI is not unexpected.

MSAs actively growing in population are not necessarily also enlarging their Frontier Skilled workforces. Miami, Houston, and Dallas lag in the bottom half of the large MSA list, at 14th, 26th, and 22nd, respectively, primarily because they've not yet developed a dedicated, tech-focused workforce.

Overall, across the four geographical regions—West, Midwest, Northeast, and South—the West’s workforce dominates the country with its Frontier Skills concentrations, while the South lags behind the rest of the nation.

 

The BGI document reveals how America is managing the deluge of AI-enabled business opportunities now flooding its markets. Organizations intent on building their AI-fueled 'Frontier Skilled' workforce can look to the successes of these regions to ensure their AI adoption strategy promises similar rewards.

 

AI in America: What’s Happening Here

Pam Sornson, JD

March 19, 2024

The United States, at the federal level and in several of its states, is pursuing legislation to govern the use of artificial intelligence within its jurisdictions. Appropriately concerned about the threats posed by the technology, as well as enthused by its benefits and opportunities, political leaders are seeking to gain some form of control over the as-yet unregulated digital capacity before it becomes too deeply embedded in society in its present 'wild west' state.

 

Personal Problem. National Challenge.

The demand for AI regulation grows daily as more individuals experience fraud and loss caused by nefarious AI operatives. Fraud attempts are growing exponentially as the technology infiltrates unregulated – and therefore unprotected – corporate databanks. Messages sent through all channels now mimic authentic ones from trusted merchants and service providers, confusing and misleading their recipients.

Federal agencies are very aware of the challenge: “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan. The FTC (Federal Trade Commission) has already finalized a rule prohibiting the impersonation of businesses and government offices; it’s now working on a similar regulation banning the impersonation of people.

The FTC’s action is just one avenue America is pursuing in its quest to gain control over rampant AI interferences within its territories. In fact, the nation launched its official AI management strategy in 2019 when the National Institute of Standards and Technology (NIST) released its Plan for Federal AI Standards Engagement. Back then, the main goal was to provide guidance for setting priorities and levels of government oversight into future AI developments to “speed the pace of reliable, robust, and trustworthy AI technology development.” Fast forward four years, and the new goal set includes stopping the overwhelming influx of unwanted AI programming while harnessing its emerging technological capacities to improve national fortunes.

 

Interim Steps

In the intervening years, the United States has made progress in its effort to manage AI resources:

In May 2021, NIST developed its AI Standards Coordination Working Group (AISCWG) to “facilitate the coordination of federal government agency activities related to the development and use of AI standards, and to develop recommendations relating to AI standards [for] the Interagency Committee on Standards Policy (ICSP) as appropriate.”

Also in 2021, the National Defense Authorization Act of 2021 (NDAA) specifically authorized NIST to develop AI parameters for the Department of Defense, Department of Energy national security programs, Department of State, and the Intelligence Community. (President Biden signed the most recent NDAA, for fiscal year 2024, into law in December 2023.)

The National Science Foundation now offers grants in support of AI research and development aimed at ensuring access to reliable and trusted technology. From this source have arisen the National Artificial Intelligence Research Institutes, which enlist public and private entities to collaborate on potential responses to AI evolutions, both positive and negative.

The U.S. Department of State is busy working with international organizations and governments to integrate wide-ranging AI regulatory efforts into a cohesive whole. The agency strongly supported the principles developed by the Organisation for Economic Co-operation and Development (OECD). It is also a member of the Global Partnership on Artificial Intelligence, which is housed in the OECD and works to connect the theories underlying AI programming with the practices that emerge in its development.

These agencies continue to make progress.

 

Widespread Federal Efforts

Today, numerous federal agencies are engaged in AI research and development to improve processes, reduce risk and loss, and enhance their capacity to serve their constituents. The Government Accountability Office (GAO) monitors those activities and acts as an overseer of anything that affects the country as a whole, including AI initiatives. The GAO has developed an AI Accountability Framework that guides individual agencies in their AI efforts regarding data management, information monitoring, systems governance, and entity performance. From the federal point of view, AI programming emerging from these departments must be responsible, reliable, equitable, traceable, and governable. The framework guides each agency as it initiates its AI systems to ensure they're compatible with those mandated standards.

The GAO also tracks those efforts and reports on their progress, and its recent December 2023 report reveals strides being made – and steps to be taken:

Twenty of 23 agencies reported current or expected AI 'use cases' in which AI would be used to resolve an issue or problem. Of those, NASA and the Departments of Commerce and Energy identified the greatest number of situations where AI will help national efforts (390, 285, and 117, respectively). More than 1,000 potential use cases have surfaced across the government.

There were only 200 or so instances of AI in practice as of 2022, while more than 500 were in the planning phase.

Ten of the 23 agencies had fully implemented all the AI standards mandated for their agency, while 12 had made progress but had not completed those tasks. The Department of Defense was exempted from this review because it was issued other AI mandates to follow.

The GAO report also provides recommendations for 19 of the 23 agencies, which include next steps into 2024 and beyond. For the most part, these recommendations focus on ensuring that organizations downstream from their national overseer (including federal, state, and regional agencies) have the guidance and standards they need to provide appropriate AI implementation within their area. Some of those recommendations include adding AI-dedicated personnel, enhancing AI-capable technologies, and ensuring a labor force that is well-versed in AI operations.

Individual states are also developing AI management resources, although those are focused on in-state needs and opportunities. The White House has issued a Blueprint for an AI Bill of Rights, and newly proposed federal regulations will impact both the focus and the trajectory of AI activities in the years to come.

 

Artificial Intelligence has arrived, and its influence continues to grow. Gaining control over that growth will allow it to enhance the lives of all of humanity. The United States government is dedicated to controlling AI because failing to master it for appropriate purposes poses potentially existential threats to the entire planet.

 

AI Regulation: The Current State of International Affairs

Pam Sornson, JD

March 19, 2024

It is a decided understatement to say that there are legitimate concerns about entities using Artificial Intelligence (AI) for nefarious purposes. The opportunities for its misuse are significant, considering its capacity to generate documents, images, and sounds that present as 100% authentic and true.

Accordingly, leaders across most community sectors are discussing and developing rule sets designed to govern the use of the technology. Once those rules are established and embraced, the development and adoption of correlating enforcement strategies and standards will keep the world safe from the misuse of this unprecedented computing capability. At least, that’s the plan …

 

AI Governance Drivers

Governments, industrial leaders, and technology experts all agree that the threats posed by AI are immense.

In July 2023, United Nations Secretary-General Antonio Guterres warned the world that unchecked AI resources could “cause horrific levels of death and destruction” and that, if used for criminal or illicit state purposes, it could generate “deep psychological damage” to the global order.

Also in Summer 2023, the International Association of Privacy Professionals (IAPP) analyzed AI adoption practices in numerous settings to evaluate how regulators might address issues that have already arisen. Recent court cases show that the protections previously enjoyed by software developers might be eroding as more civil liberty, intellectual property, and privacy cases with AI-based fact patterns hit the courts. As ‘matters of first impression,’ the results of these early lawsuits will be the foundation of what will undoubtedly become an immense new body of laws.

All by itself, Generative AI is causing much consternation in computer labs and C-Suites around the world. AI vendors have utilized vast quantities of web-based copyrighted data as part of the software’s ‘training materials,’ and the owners of those copyrights aren’t happy that their work has been co-opted without their permission.

As has been the case with all technology, the promise of AI and all its permutations creates as many concerns as it does possibilities. The world is now grappling with how to bring those concerns under control.

 

AI Governance Inputs

Entities invested in putting controls around AI threats are also working on enforcement systems to ensure compliance by AI proponents. Three early entries into the fray help to outline the scope, depth, and breadth of the challenge all AI developers are now facing as their products become more ubiquitous in the world:

The Asilomar Principles

While AI programming itself has been around for a while, articulated considerations about its safe usage are relatively new. In 2017, the Future of Life Institute gathered a group of concerned individuals to explore the full range of its opportunities and issues. The resulting 'Asilomar Principles' set out 23 guidelines that parse AI development activities into three categories: Research, Ethics and Values, and Longer-Term Issues.

The OECD

The work of the Organisation for Economic Co-operation and Development (OECD) is also notable. The OECD works to establish uniform policy standards related to the economic growth of its 38 market-based, democratic members. These countries research, inaugurate, and share development activities across more than 20 policy areas involving economic, social, and natural resources. As a group, the OECD considers AI a general-purpose technology that belongs to no one country or entity. Accordingly, its members agree that its use should be based on international, multi-stakeholder, and multidisciplinary cooperation to ensure that its implementation benefits all people as well as the planet.

The OECD parses its work into two main categories, each of which has five subparts.

Its Values-Based Principles focus on the sustainable development of AI stewardship to provide inclusive growth and well-being opportunities that pursue humanitarian and fair goals. AI developers should make their work transparent and understandable to most users while ensuring the validity of its safety and security capacities, and they should be – and be held – accountable for the programs they create.

For policymakers, the OECD recommends establishing international standards that define the safe and transparent development of AI technologies that function compatibly within the existing digital ecosystem. Policy environments should be flexible enough to encourage the growth and innovation of the software while protecting human rights and limiting its capacity to be used for less than honorable purposes.

The European Union

On Wednesday, March 13, 2024, the European Parliament voted to adopt the 'Artificial Intelligence Act,' the world's first set of AI regulations. It took three years of negotiations, data wrangling, and intense discussions to achieve ... "historic legislation [that] sets a balanced approach [to] guarantee [our] citizen's rights while encouraging innovation" around the development of AI. Formal approval of the final text is expected in April; there is no expectation of significant opposition.

Fundamentally, the AI Act is focused on limiting the risks AI presents, especially in certain situations. Some AI programming performs rote actions that don't require intense analysis or oversight, so it doesn't need intense regulation, either. Other AI platforms, however, incorporate more sensitive data in their computations, such as private biometric or financial data. Inappropriate use of such sensitive information could be disastrous for the people exposed, and the scale of AI technology can expand that risk dramatically. The AI Act requires developers to prove their model's trustworthiness and transparency, as well as demonstrate that it respects personal privacy laws and doesn't discriminate. Entities found to be non-compliant with the AI Act risk a fine of up to 7% of their global annual turnover.

 

These are just three of the many governments and organizations working to gain control over the use of AI within their jurisdictions. Leaders in the United States are also focused on the concern. The second article in this edition of the Pulse looks at what’s happening in America as it, too, reels from the immense and growing impact of AI on virtually all of its systems and communities.

 

AI Regulation: Fears and a Framework

Pam Sornson, JD

March 5, 2024

Even while artificial intelligence (AI) offers immense promise, it also threatens equally immense disaster, at least in the minds of the industry professionals who recently weighed in on the topic. In a 2023 letter, more than 350 technology leaders shared with policymakers their concerns about the possible dangers AI poses in its present, unfettered iteration. Their concerns are notable not just because of the obvious issues raised by a technology that already closely mimics human activities but also because of the role these AI pioneers have played in designing, developing, and propagating the technology around the world. Requesting a 'pause' in further development of the software, the signatories suggest that halting AI progress until implementable rules are created and adopted would be beneficial. The break would allow for the implementation of standards that would prevent the technology's evolution into a damaging and uncontrollable force.

 

Industry Consensus: “AI Regulation Should be a Global Priority”

The 22-word statement is concise in its warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The stark sentence embodies the fears arising among industry leaders as they watch unimpeded AI technology permeate ever more and larger sectors of the global community.

The statement was published by the Center for AI Safety (CAIS), a non-profit organization dedicated to reducing large-scale risks posed by AI. Dan Hendrycks, its executive director, told Canada’s CBC News that industrial competition over AI tech has led to a form of ‘AI arms race,’ similar to the nuclear arms race that dominated world news through the 20th century. He further asserts that AI could go wrong in many ways as it is used today, and that ‘autonomously developed’ AI – technology generated by its own function – raises equally significant concerns.

Hendrycks’ concerns are shared by the statement’s signatories, among them two of the three researchers considered the ‘Godfathers’ of the AI evolution: Geoffrey Hinton and Yoshua Bengio (together with Yann LeCun, the trio won the 2018 Turing Award, the Nobel Prize of computing). Other notable signers include executives from Google (Lila Ibrahim, COO of Google DeepMind), Microsoft (CTO Kevin Scott), OpenAI (CEO Sam Altman), and Gates Ventures (Bill Gates), who joined the group in raising the issue with global information security leaders. Analysts with eyes on the tech agree that its threat justifies worldwide collaboration. ‘Like climate change, AI will impact the lives of everyone on the planet,’ says technology analyst and journalist Carmi Levy. The collective message to world leaders: “Companies need to step up … [but] … Government needs to move faster.”

 

Where to Begin: Regulating AI

Even before the statement was released, the U.S. National Institute of Standards and Technology (NIST) was already working to promulgate AI rules and strategies for the United States. Its January 2023 “AI Risk Management Framework 1.0” (AI RMF 1.0) set the initial parameters for America’s embrace of AI regulation, defining AI as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments (adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).”

Define the Challenge

Framing the Risk

The NIST experts parse AI threats into categories alongside other, similar national security concerns, such as domestic terrorism or international espionage. Styles of AI technology, for example, can pose short- or long-term concerns, with higher or lower probabilities of occurring, and with the capability of emerging locally, regionally, nationally, or in any combination of the three.

Managing risks within those parameters requires accurate assessment, measurement, and prioritization, as well as an analysis of the risk tolerance of relevant users.

Assessing Programming Trustworthiness

At the same time, the agency is looking for ways to confirm that the technology is already safe to use. Factors that play into this assessment include the software’s validity and reliability for its purpose, its inherent security programming, its accountability and resiliency capacities, its functional transparency, and its overall fairness to users, to name just a few.

Generating Framework Functions

At its heart, the AI RMF 1.0 enables the conversations, comprehension, and actions that facilitate the development of safe risk management practices for AI users. Its four primary functions follow a process that creates an overarching culture of AI risk management, populated by the steps needed to reduce AI risks while optimizing AI benefits.

Develop the Protocols

Govern: The ‘Govern’ function outlines the strategies that seek out, clarify, and manage risks arising from AI activities. Users familiar with the work of the enterprise can identify and reconfigure the circumstances under which an AI program might deviate from existing safety or security requirements. Subcategories within the ‘Govern’ function address current legal and regulatory demands, policy sets, active practices, and the entity’s overarching risk management schematic, among many other factors.

Map: AI functions interact with many interdependent resources, each of which poses its own security concern. AI implementation requires an analysis of each resource and of how it might impact or impede AI adoption and safe usage. This function anticipates that safe and appropriate AI decisions can be inadvertently rendered unsafe by later inputs and determinations; anticipating those challenges early reduces their opportunity to cause harm in the future.

Measure: This step naturally follows the Map function, directing workers to assess, benchmark, and monitor identified risks while staying alert to emerging ones. Comprehensive data collection against relevant functionality and safety/security metrics gives entities control over how their AI implementation performs within the organization and industry, from launch through its entire productive lifecycle.

Manage: Activities involved in managing AI risks draw on the priorities established by the Govern function, giving organizations strategies to respond to, oversee, and recover from AI-related incidents or events. The Manage function anticipates that oversight of AI risks and concerns will be incorporated into enterprise operations in the same way that other industry standards and mandates are followed.
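To make the cycle more tangible, here is a minimal, hypothetical sketch of how an organization might encode the four AI RMF functions in an internal tracking tool. Only the function names come from the framework; the fields, metrics, and example entries are invented for illustration and are not part of NIST’s specification.

```python
from dataclasses import dataclass, field

# Hypothetical internal tracker for the four AI RMF 1.0 functions.
# Only the function names (Govern, Map, Measure, Manage) come from NIST;
# the structure and example data below are illustrative assumptions.

@dataclass
class RiskItem:
    description: str      # the risk being tracked (Map)
    likelihood: str       # e.g., "low", "medium", "high"
    horizon: str          # e.g., "short-term" or "long-term"
    status: str = "open"  # lifecycle state of the item

@dataclass
class AIRMFCycle:
    govern: list[str] = field(default_factory=list)            # policies, accountability
    mapped: list[RiskItem] = field(default_factory=list)       # risks identified in context
    measures: dict[str, float] = field(default_factory=dict)   # monitored metrics
    manage_log: list[str] = field(default_factory=list)        # responses and recoveries

    def measure(self, metric: str, value: float) -> None:
        """Record a benchmark or monitoring reading (Measure)."""
        self.measures[metric] = value

    def manage(self, item: RiskItem, action: str) -> None:
        """Respond to a mapped, measured risk and log the action (Manage)."""
        item.status = "mitigated"
        self.manage_log.append(f"{item.description}: {action}")

# One pass through the cycle, with invented example data:
cycle = AIRMFCycle(govern=["model release policy", "incident escalation path"])
risk = RiskItem("chatbot leaks personal data", likelihood="medium", horizon="short-term")
cycle.mapped.append(risk)
cycle.measure("pii_leaks_per_10k_queries", 0.3)
cycle.manage(risk, "added output filter and a retraining schedule")
```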

 

America is one of many entities working to establish viable controls over a potentially uncontrollable resource. Other nations, groups of nations, industries, and public and private companies are all engaged in creating regulations that allow for the fullest implementation of AI for the public good while reducing its existential threat to the world. The optimal outcome of all these efforts is a safe and secure technology that advances humankind’s best interests while helping it reduce the challenges it creates for itself.

AI Regulation: A Mandate for Management

Pam Sornson, JD

March 5, 2024

On October 30, 2023, President Joe Biden issued an Executive Order compelling the development of new safety and security standards for Artificial Intelligence (AI) technologies. By doing so, the President acknowledged the role AI is already playing – and will continue to play – in the nation’s economic, industrial, and social development. The Order urges interested and invested parties to protect the community and promote fair and reasonable ideals when adopting the technology, and it represents just one aspect of the current global push to gain control over the ethical use of AI.

 

Organizing Innovation …

Regulating AI technology presents a unique set of challenges that must be addressed if the digital asset is to be safe to use and add more value than hazard to the community. Myriad individual AI opportunities coalesce into a constellation of concerns, and each iteration and issue requires a dedicated regulatory response; together, they present a massive mission for the agencies responsible for AI’s oversight. The challenge is to coordinate a global effort to put guardrails around the tech that optimize its assets without unnecessarily impinging on its capacities.

 

… To Enhance the Good …

Just a short list of adopted uses for AI reveals the scope and extent of rules needed to maintain its integrity:

Connectivity – AI programming is already connecting services and their providers into enhanced teams capable of more together than each could offer individually. In the healthcare sector, as an example, AI-powered programs are already conducting triage functions, issuing preliminary diagnoses, and sharing critical health information with far-flung medical team professionals at a pace unmatched by traditional collaborative methods.

Energy Management – The ever-expanding ‘distributed’ energy sector is also embracing AI opportunities to improve performance and build in reliability. Traditional community-wide power systems used centralized power stations to direct energy resources to their customers. Over the past few decades, however, the addition of on-site power options (solar, wind, and geothermal) has reversed the flow of power: these resources now feed energy into the shared grid, and their owners see reduced energy costs or even receive revenue checks for their contribution to the network supply. The consequence is a power system vastly more complex than the original, one that requires much more hands-on control. Emerging AI programs promise to streamline and coordinate that evolution.

Logistics – The COVID-19 era demonstrated the critical role that logistics play in the global economy, as supply chains failed and left millions of people and businesses without the goods and services they needed. AI in this sector is revolutionizing supply chain management and control. Automated warehouses populated by robots that don’t eat or sleep now provide a significant proportion of the physical labor involved, while sensors and cameras track the movement of goods through the system from their original creation to their ultimate destination. Efficiency rises impressively in AI-enhanced facilities, promising even more innovation and further adoption of AI programming across the industry.

Each of these sectors involves thousands of companies and millions of people, all of which can influence the integrity of the respective system. Without regulation, the actions of any one person or entity can generate disastrous consequences for the others.

 

… And Manage the Bad.

Another short list of AI realities reveals the threats that an unregulated resource poses:

Job Losses – Automation – using robots to perform functions previously managed by humans – has already caused millions of job losses. In 2023, 37% of businesses responding to a New York-based survey stated they had replaced human workers with automated, AI-driven programming. Customer service workers were at the top of the list to be eliminated, followed by researchers and records analysts. Almost half (44%) of survey respondents indicated that more layoffs were likely in 2024.

Disinformation – The term ‘fake news‘ became ubiquitous in the past decade as political entities sought to sway voters with false information. Using AI to produce and share inaccurate or manufactured data creates fundamental challenges for every entity that relies on data accuracy and veracity to maintain its credibility. Taken to extremes, such false commentary amounts to propaganda, which some assert is the world’s ‘biggest short-term threat.’

Security – Data security – keeping personal, corporate, and governmental information safely locked behind impregnable walls – has been top-of-mind in most industries for a long time, and a vast body of rules and regulations has evolved to protect it. AI presents a novel twist: it generates ‘new’ information that may or may not be accurate or truthful. The computer program that created this ‘new’ information isn’t ‘aware’ of or concerned about its veracity (computers don’t ‘think’) and treats it like any other data bit within its reach. Consequently, the responses returned by the program are only as accurate and reliable as its affiliated databases; when those aren’t reliable, the AI response won’t be trustworthy either. Researchers note that AI software can be ‘gullible,’ producing manipulated responses when fed leading or misleading questions. It is also corruptible: programmers can manipulate an AI’s function by ‘poisoning’ its databases with false data, training the technology to respond according to those directives rather than to legitimate AI functioning (the sketch below illustrates the dynamic). Without focused interventions, AI has the capacity to perpetuate biases and inequities when those influences are already programmed into its information stores.
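A toy sketch of that poisoning dynamic, using entirely made-up data: the trivial ‘model’ below answers by majority vote over its training records, so injecting enough false records flips its answer. Real attacks target vastly larger training corpora, but the mechanism is similar in spirit.

```python
from collections import Counter

# Toy illustration of data poisoning. A trivial "model" answers a topic
# by majority vote over its training records; injecting false records
# flips the answer. Purely illustrative, with invented data throughout.

def train(records: list[tuple[str, str]]) -> dict[str, str]:
    """Map each topic to the answer most often seen in training."""
    votes: dict[str, Counter] = {}
    for topic, answer in records:
        votes.setdefault(topic, Counter())[answer] += 1
    return {topic: counts.most_common(1)[0][0] for topic, counts in votes.items()}

clean_data = [("earth_shape", "round")] * 100   # legitimate records
poison = [("earth_shape", "flat")] * 150        # attacker-injected records

print(train(clean_data)["earth_shape"])           # -> "round"
print(train(clean_data + poison)["earth_shape"])  # -> "flat"
```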

 

AI presents countless iterations and permutations that can be used for good – or for ill. Without any regulation of its use, its misuse poses a significant threat to virtually every corner of the global community. And, with so many entities now contributing to the AI universe, both legitimate and illicit, governing bodies are appropriately focused on putting guardrails around all AI efforts to maintain global stability and industrial reliability. The President’s Executive Order is one step toward gaining control over an emerging technology that offers so much promise for a healthier, more productive future for the world.

Generative AI: A Digital Double-Edged Sword

Pam Sornson, JD

February 20, 2024

The benefits of embracing artificial intelligence (AI) are apparent. Already, the software is providing previously unimaginable service to many industries, speeding access to healthcare, streamlining education processes, and resolving once-unsolvable problems. However, as bright as its promise appears to be, the programming also poses significant threats to the entities that rely on it for both work and life functions, including today’s typical consumers. Any person or company that enjoys the assets offered by AI should also be aware of its vulnerabilities so that they don’t become victims of the burgeoning AI crime sector.

 

A Wolf in Sheep’s Clothing?

Generative AI (GenAI) is a subset of the same programming that provides ‘traditional’ AI functions. Both are ‘trained’ software programs, meaning that each is designed to pursue specific goals within the particular parameters of its developer’s strategy. Both styles scour billions of data bits to locate appropriate responses to user demands, and each provides data collection, analysis, and computing capacities far beyond those of humans.

However, the two differ in one crucial way, and that difference is what makes GenAI so much more dangerous than its predecessor:

Traditional AI offers responses based on the existing data within its databases and the structure built around them. It can only respond with information already contained in its research resources.

GenAI, on the other hand, can create responses that are original to its search-and-create process. It takes existing information and spins it into creative narratives that are entirely its own, using existing data as a learning template to develop new and original ‘thought.’ The challenge arises from the fact that it is often impossible to tell a human-generated piece of content from one generated by the computer. And since computers are not bound by human-mandated principles of ethics and morals, their products can appear completely valid and authentic even though they are rife with falsehoods and completely unreliable.

Simply put, GenAI can act as an independent ‘person’ by producing original content with the same sense of authority and integrity as that produced by its human counterpart but without adherence to the rules, regulations, and standards that bind that human effort.
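A minimal, purely illustrative sketch of that distinction: the ‘traditional’ function below can only return what is already stored, while the ‘generative’ one recombines stored fragments into sentences that never existed in its data. The data structures and functions here are invented for this article; real systems are vastly more complex.

```python
import random

# Invented toy "training data" for illustration only.
FACTS = {"capital_of_france": "Paris", "largest_ocean": "Pacific"}
FRAGMENTS = ["the capital", "of France", "is", "Paris", "the Pacific", "the largest ocean"]

def traditional_ai(query: str) -> str:
    """Retrieval: can only return data already in its store."""
    return FACTS.get(query, "no answer found")

def generative_ai(length: int = 5) -> str:
    """Generation: recombines learned fragments into novel output,
    fluent-looking but with no built-in guarantee of truth."""
    return " ".join(random.choice(FRAGMENTS) for _ in range(length))

print(traditional_ai("capital_of_france"))  # always "Paris"
print(generative_ai())                      # a novel, possibly false, sentence
```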

 

Creative Chaos Ensues

The application of GenAI for less-than-honorable purposes is already rampant:

‘Deepfakes’ are images generated from real pictures, manipulated to depict something that never happened. This type of GenAI often combines data from a variety of sources to create a plausible, realistic ‘new’ version of those combined concepts. Nefarious operators often distort images of political figures (as examples) to match a preferred narrative, thereby misleading consumers into believing that the revised version is a truthful one.

‘Phishing’ offers another opportunity to distort or bury the truth from trusting users. Phishing activities typically use email or text prompts to entice consumers into clicking on links that appear to come from a trusted source and that offer a valuable and welcome service. Those links, however, actually direct the user into a ‘dark’ space on the web, where their confidential, personal information – birthdates, banking info, healthcare specifics, etc. – is quickly harvested and stolen.

ChatGPT is, perhaps, the best-known example of GenAI. This software produces human-like results that are often indistinguishable from those presented by actual humans. To users who are not aware of the concern, ChatGPT’s creative products are frequently accepted as valid and authentic, as if a person, not a machine, generated them.

There are many ways for nefarious GenAI to manipulate the community to achieve its developer’s intended goals:

Manipulating images also frequently violates copyright rules, as developers rarely trace their source material back to its rightful owner. Organizations that use GenAI programming to pursue their proprietary ambitions might unknowingly compromise another entity’s legally protected assets.

‘False’ images created with GenAI can also amplify existing biases and prejudices, making current social issues even more dangerous. A recent analysis of ~5,000 pictures created using text-to-image software (a form of GenAI) revealed that the images generated showed extreme gender and racial stereotypes. Unsuspecting viewers might believe that those destructive distortions are actually verified versions of reality.

As noted above, inappropriate disclosure of sensitive information is also problematic when the computer program doesn’t discern it or won’t conform to the boundaries designed to protect that data.

 

Harnessing GenAI for Good

GenAI still offers users tools to achieve their goals and ambitions, despite the threat of misuse and exploitation marring its opportunity. By learning and following guidelines for appropriate and valid GenAI use, every entity can maximize the value provided by the software while reducing its exposure to inappropriate outcomes:

Use established, publicly available GenAI tools. Commercially available software packages facilitate exceptional AI functions without exposing users to the pitfalls of the format. Many of these digital programs are designed to address the demands of specific industries as well, so adopting the technology might also be a necessary step to retain the company’s current market share.

Invest in customization services. Every organization operates differently from its competitors, and those distinctions are also often its unique ‘claim to fame’ within its sector. AI programming can be modified to both protect and highlight the existing presence while also adding new values and opportunities for consumers.

Add data feedback and analysis tools to ensure the software is – and remains – on target for its purpose. Data in all its forms is the true currency of modern society; corporate databases contain troves of unexplored information that can be harnessed to produce even more success for the company. AI programs will use that newly surfaced data to refine their functions, providing even more value to the enterprise.

 

Both traditional and generative AI offer tremendous benefits, as well as potential threats, to every organization. Accordingly, the process of adopting the technology and adapting it for proprietary use should be carefully mapped out and managed. Companies that master that project can gain significant advantages in sectors where such advances remain unexplored – which was, after all, the purpose behind developing artificial intelligence in the first place.

 

Instructional AI: Advancing Education’s Capacities

Pam Sornson, JD

February 20, 2024

Like virtually every other type of modern technology, ‘artificial intelligence’ (AI) has triggered great fear, effusive elation, and almost every emotion in between. However, also like other modern technologies, AI’s bona fide impact on society is not fully understood, as so many of its capabilities – real and potential – have yet to be explored. Despite the challenge of not knowing whether its threats are actual or imagined, AI is making inroads into many cultural and societal venues, including higher education. How is it in use today? How might it be used tomorrow? And is its presence in the classroom a boon or a bust for modern learners?

 

AI as a Ubiquitous Tool

Most people already interact with AI, even if they don’t know that’s the programming they’re accessing. The digital resource drives virtually all of today’s smart devices, using intuitive and insightful strategies to facilitate a myriad of services all through a single portal.

Mapping programs steer users through the physical world, offering directions, time-to-destination, traffic notices, and even restaurant and entertainment suggestions.

The name (a noun) of today’s most popular search engine is now also a verb, and many people use it to describe how they found their latest new gadget or problem solution.

Even customer service ‘providers’ are more AI today than they are human. The now ubiquitous ‘chatbot’ – literally a ro’bot’ that ‘chats’ with people – responds to most online inquiries and is often the only ‘human’ resource consumers have contact with when looking for answers to their concerns.

It’s most likely that even those who fear AI and its potential threats use the resource as a regular part of their typical day.

And that exposure to and use of AI – and its closely related cousin, machine learning – isn’t diminishing either. The technology has the capacity to improve itself, and many cutting-edge computer programs in use today are programmed to enhance their own internal functioning. The potential values offered by these ‘smart machines’ make them increasingly desirable as business assets, so investments in AI are growing at a notable rate.

‘Healthcare tech’ uses AI to capture data, generate reports, connect medical teams, and inform patients. ‘Telehealth,’ the delivery of medical advice over an internet connection, is becoming the most popular method for connecting with healthcare professionals. During the COVID-19 pandemic, the number of telehealth appointments rose from 5 million in 2020 to over 53 million in 2022.

‘Smart assistants,’ such as Siri and Alexa, now monitor home and office systems to modulate room temperature, lighting, door access, and more. In 2022, more than 120 million American adults were using these assistants at least once a month.

The use of digital payment portals is also on the rise. Most of today’s banks use AI technology to interact with all their customers, both individuals and businesses. The tech facilitates 24-hour access, online deposits, withdrawals, and other services and maintains monitoring capacities over billions of dollars of financial and other assets.

Analysis of AI’s growth as a corporate asset shows that it is also quickly becoming foundational to numerous existing and emerging industries. Many of these industries – and the countless companies that make them up – are embracing the AI opportunity in the post-COVID era to revise their economic foundation and that of their communities.

 

Instructional AI in the Spotlight

Clearly, AI has been successfully embedded in both personal and corporate routines for some time, so it’s not surprising that many industries, including the education sector, have adopted the technology into their daily activities as well. In schools, AI programs and applications are typically identified as ‘Instructional’ AI, and those, too, have been around for several years. Their uses run the gamut of educational functions:

In some cases, the tech is used to track student activities. A well-programmed AI service can track attendance, test scores, course availability, and other data-rich nuances of the learner’s experience to inform school admins about their progress and facilitate fine-tuning of their overall educational adventure.

AI is also proving to be extremely valuable in streamlining educational resources to meet the needs of each individual student. Data collected reveals where those learners are experiencing challenges and where those challenges are originating. Sometimes, it shows that the person needs additional support; other times, it demonstrates that the school has missed the mark for serving this particular person or class of people.

Perhaps the most common implementation of AI in educational processes, however, is its use in instructional design. The designers of programs, courses, and the classes that comprise them are constantly improving their resources to better meet student needs, with the goal of enhancing learners’ absorption of the content so they can achieve success in their chosen subject matter. AI tools give these ‘education architects’ the capacity to deliver deeply personalized course content that responds to the individual learner’s past performance, learning pace, and preferences. Further, in addition to structuring the course in a format more compatible with the student’s preferences, AI also facilitates adaptations to the system in real time: data collected as the course unfolds informs the programming, which can then modify modules or lesson plans accordingly (a minimal sketch of this loop follows this list).

AI also offers an asset that’s already well-favored in the community: its gaming capacity. Game-based learning opportunities that are enhanced by AI can provide more individualized and flexible learning opportunities for virtually all students, which can also increase their likelihood of persisting to graduation and then finding the job and career that best suits their needs.
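As a rough illustration of the adaptive loop described above, here is a minimal sketch of next-module selection driven by quiz performance. The module names, thresholds, and pacing logic are invented assumptions for this article, not a description of any real instructional platform.

```python
# Toy adaptive-content selector. All names and thresholds are invented
# for illustration; real instructional-design systems are far richer.

MODULES = {
    "review": "Revisit prerequisite material with worked examples",
    "standard": "Proceed to the next planned lesson",
    "enrichment": "Offer an advanced, self-paced extension",
}

def next_module(recent_scores: list[float], preferred_pace: str = "normal") -> str:
    """Pick the next module from a learner's recent quiz scores (0-100)."""
    average = sum(recent_scores) / len(recent_scores)
    if average < 60:
        return "review"        # struggling: reinforce fundamentals first
    if average > 90 and preferred_pace == "fast":
        return "enrichment"    # excelling learners who want to move quickly
    return "standard"

print(MODULES[next_module([55, 48, 62])])      # review path
print(MODULES[next_module([95, 92], "fast")])  # enrichment path
```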

 

The use of AI as a learning tool is growing as more higher education institutions adopt it to serve their myriad programs and workflows. For students, the digital asset is proving to be an invaluable addition to their education, so long as they use it with integrity. For the higher education sector in general, AI also offers the opportunity to develop whole new avenues of courses and careers to meet the burgeoning demand for skilled AI technicians. At least at first glance, AI is performing exceptionally well for the education community.