AI in America: What’s Happening Here

Pam Sornson, JD

March 19, 2024

The United States, at the federal level and in several of its states, is pursuing legislation to govern the use of artificial intelligence within its jurisdictions. Appropriately concerned about the threats posed by the technology, and equally enthused by its benefits and opportunities, political leaders are seeking some form of control over the as-yet unregulated digital capacity before it becomes too deeply embedded in society in its present ‘wild west’ state.

 

Personal Problem. National Challenge.

The demand for AI regulation grows daily as more individuals experience fraud and loss at the hands of nefarious AI operators. Fraud attempts are multiplying as the technology infiltrates unregulated – and therefore unprotected – corporate databanks. Messages sent through every channel now mimic the authentic ones sent by trusted merchants and service providers, confusing and misleading their recipients.

Federal agencies are very aware of the challenge: “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan. The FTC (Federal Trade Commission) has already finalized a rule prohibiting the impersonation of businesses and government offices; it’s now working on a similar regulation banning the impersonation of people.

The FTC’s action is just one avenue America is pursuing in its quest to gain control over rampant AI interference within its territories. In fact, the nation launched its official AI management strategy in 2019, when the National Institute of Standards and Technology (NIST) released its Plan for Federal AI Standards Engagement. Back then, the main goal was to provide guidance for setting priorities and levels of government oversight of future AI developments to “speed the pace of reliable, robust, and trustworthy AI technology development.” Four years later, the goals also include stopping the overwhelming influx of unwanted AI programming while harnessing its emerging technological capacities to improve national fortunes.

 

Interim Steps

In the intervening years, the United States has made progress in its effort to manage AI resources:

In May 2021, NIST developed its AI Standards Coordination Working Group (AISCWG) to “facilitate the coordination of federal government agency activities related to the development and use of AI standards, and to develop recommendations relating to AI standards [for] the Interagency Committee on Standards Policy (ICSP) as appropriate.”

Also in 2021, the National Defense Authorization Act (NDAA) specifically authorized NIST to develop AI parameters for the Department of Defense, Department of Energy national security programs, Department of State, and the Intelligence Community. President Biden signed the measure into law in December 2023.

The National Science Foundation now offers grants in support of AI research and development aimed at ensuring access to reliable and trusted technology. From this source have arisen the National Artificial Intelligence Research Institutes, which enlist public and private entities to collaborate on potential responses to AI evolutions, both positive and negative.

The U.S. Department of State is working with international organizations and governments to integrate wide-ranging AI regulatory efforts into a cohesive whole. The agency strongly supported the principles developed by the Organisation for Economic Co-operation and Development (OECD). It is also a member of the Global Partnership on Artificial Intelligence, which is housed at the OECD and works to connect the theories underlying AI programming with the practices that emerge in its development.

These agencies continue to make progress.

 

Widespread Federal Efforts

Today, numerous federal agencies are engaged in AI research and development to improve processes, reduce risk and loss, and enhance their capacity to serve their constituents. The Government Accountability Office (GAO) monitors those activities and acts as an overseer of anything that affects the country as a whole, including AI initiatives. The GAO has developed an AI Accountability Framework that guides individual agencies in their AI efforts regarding data management, information monitoring, systems governance, and entity performance. From the federal point of view, AI programming emerging from these departments must be responsible, reliable, equitable, traceable, and governable. The framework guides each agency as it initiates its AI systems to ensure they’re compatible with those mandated standards.

The GAO also tracks those efforts and reports on their progress, and its recent December 2023 report reveals strides being made – and steps to be taken:

Twenty of 23 agencies reported current or expected AI ‘use cases’ in which AI would be used to resolve an issue or problem. Of those, NASA and the Departments of Commerce and Energy identified the greatest number of situations where AI will help national efforts (390, 285, and 117, respectively). More than 1,000 possible use cases have surfaced across the government.

There were only 200 or so instances of AI in practice as of 2022, while more than 500 were in the planning phase.

Ten of the 23 agencies had fully implemented all the AI standards mandated for their agency, while 12 had made progress but had not completed those tasks. The Department of Defense was exempted from this review because it is subject to separate AI mandates.

The GAO report also provides recommendations for 19 of the 23 agencies, which include next steps into 2024 and beyond. For the most part, these recommendations focus on ensuring that organizations downstream from their national overseer (including federal, state, and regional agencies) have the guidance and standards they need to provide appropriate AI implementation within their area. Some of those recommendations include adding AI-dedicated personnel, enhancing AI-capable technologies, and ensuring a labor force that is well-versed in AI operations.

Individual states are also developing AI management resources, although those are focused on in-state needs and opportunities. The White House has issued a Blueprint for an AI Bill of Rights, and newly proposed federal regulations will shape both the focus and the trajectory of AI activities in the years to come.

 

Artificial Intelligence has arrived, and its influence continues to grow. Gaining control over that growth will allow it to enhance the lives of all of humanity. The United States government is determined to establish that control because failing to master AI for appropriate purposes poses potentially existential threats to the entire planet.

 

 

 

AI Regulation: The Current State of International Affairs

Pam Sornson, JD

March 19, 2024

It is a decided understatement to say that there are legitimate concerns about entities using Artificial Intelligence (AI) for nefarious purposes. The opportunities for its misuse are significant, considering its capacity to generate documents, images, and sounds that present as 100% authentic and true.

Accordingly, leaders across most community sectors are discussing and developing rule sets designed to govern the use of the technology. Once those rules are established and embraced, the development and adoption of correlating enforcement strategies and standards will keep the world safe from the misuse of this unprecedented computing capability. At least, that’s the plan …

 

AI Governance Drivers

Governments, industrial leaders, and technology experts all agree that the threats posed by AI are immense.

In July 2023, United Nations Secretary-General Antonio Guterres warned the world that unchecked AI resources could “cause horrific levels of death and destruction” and that, if used for criminal or illicit state purposes, it could generate “deep psychological damage” to the global order.

Also in Summer 2023, the International Association of Privacy Professionals (IAPP) analyzed AI adoption practices in numerous settings to evaluate how regulators might address issues that have already arisen. Recent court cases show that the protections previously enjoyed by software developers might be eroding as more civil liberty, intellectual property, and privacy cases with AI-based fact patterns hit the courts. As ‘matters of first impression,’ the results of these early lawsuits will be the foundation of what will undoubtedly become an immense new body of laws.

All by itself, Generative AI is causing much consternation in computer labs and C-Suites around the world. AI vendors have utilized vast quantities of web-based copyrighted data as part of the software’s ‘training materials,’ and the owners of those copyrights aren’t happy that their work has been co-opted without their permission.

As has been the case with all technology, the promise of AI and all its permutations creates as many concerns as it does possibilities. The world is now grappling to get those concerns under control.

 

AI Governance Inputs

Entities invested in putting controls around AI threats are also working on enforcement systems to ensure compliance by AI proponents. Three early entries into the fray help to outline the scope, depth, and breadth of the challenge all AI developers are now facing as their products become more ubiquitous in the world:

The Asilomar Principles

While AI programming itself has been around for a while, articulated considerations about its safe usage are relatively new. In 2017, the Future of Life Institute gathered a group of concerned individuals to explore the full range of its opportunities and issues. The resulting ‘Asilomar Principles’ set out 23 guidelines that parse AI development activities into three categories: Research, Ethics and Values, and Longer-Term Issues.

The OECD

The work of the Organisation for Economic Co-operation and Development (OECD) is also notable. The OECD works to establish uniform policy standards related to the economic growth of its 37 market-based, democratic members. These countries research, initiate, and share development activities across more than 20 policy areas involving economic, social, and natural resources. As a group, the OECD considers AI to be a general-purpose technology that belongs to no one country or entity. Accordingly, its members agree that its use should be based on international, multi-stakeholder, and multidisciplinary cooperation to ensure that its implementation benefits all people as well as the planet.

The OECD parses its work into two main categories, each of which has five subparts.

Its Values-Based Principles focus on the sustainable development of AI stewardship to provide inclusive growth and well-being opportunities that pursue humanitarian and fair goals. AI developers should make their work transparent and understandable to most users while ensuring the validity of its safety and security capacities, and they should be – and be held – accountable for the programs they create.

For policymakers, the OECD recommends establishing international standards that define the safe and transparent development of AI technologies that function compatibly within the existing digital ecosystem. Policy environments should be flexible enough to encourage the growth and innovation of the software while protecting human rights and limiting its capacity to be used for less than honorable purposes.

The European Union

On Wednesday, March 14, 2024, the European Union voted to implement its ‘Artificial Intelligence Act,’ the world’s first comprehensive set of AI regulations. It took three years of negotiations, data wrangling, and intense discussions to achieve … “historic legislation [that] sets a balanced approach [to] guarantee [our] citizens’ rights while encouraging innovation” around the development of AI. The full EU Parliament is expected to vote on the new laws in April; there is no expectation of significant opposition.

Fundamentally, the AI Act is focused on limiting the risks AI presents, especially in certain situations. Some AI programming performs rote actions that don’t require intense analysis or oversight, so those systems don’t need especially intense regulation, either. Other AI platforms, however, incorporate more sensitive data in their computations, such as private biometric or financial data. Inappropriate use of such sensitive information could be disastrous for the person or people exposed, and the scope of AI technology can expand that risk to a much greater scale. The AI Act requires developers to prove their model’s trustworthiness and transparency, as well as demonstrate that it respects personal privacy laws and doesn’t discriminate. Entities found to be non-compliant with the AI Act risk a fine of up to 7% of their global annual turnover.
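To make that tiered logic more concrete, here is a minimal sketch of how a developer might triage a system against a simplified two-tier split like the one described above. The category names, data types, and obligations are assumptions invented for illustration; they are not the Act’s actual legal criteria.

```python
# Minimal sketch of tiered-risk classification in the spirit of the AI Act.
# Categories, data types, and obligations below are simplified assumptions,
# not the Act's actual legal tests.

SENSITIVE_DATA = {"biometric", "financial", "health"}

def classify_ai_system(uses_data: set[str], performs_rote_task: bool) -> dict:
    """Assign a rough risk tier and the obligations that would follow from it."""
    if uses_data & SENSITIVE_DATA:
        return {
            "tier": "high-risk",
            "obligations": [
                "demonstrate trustworthiness and transparency",
                "show compliance with personal privacy rules",
                "show the model does not discriminate",
            ],
        }
    if performs_rote_task:
        return {"tier": "minimal-risk", "obligations": ["basic transparency"]}
    return {"tier": "limited-risk", "obligations": ["disclosure to users"]}

# Example: a credit-scoring tool that reads financial records lands in the high-risk tier.
print(classify_ai_system({"financial"}, performs_rote_task=False))
```

The Act itself defines several tiers and far more detailed criteria; the point of the sketch is only that the sensitivity of the data a system touches drives the obligations it carries.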

 

These are just three of the many governments and organizations working to gain control over the use of AI within their jurisdictions. Leaders in the United States are also focused on the concern. The second article in this edition of the Pulse looks at what’s happening in America as it, too, reels from the immense and growing impact of AI on virtually all of its systems and communities.

 

AI Regulation: Fears and a Framework

Pam Sornson, JD

March 5, 2024

Even while artificial intelligence (AI) offers immense promise, it also threatens equally immense disaster, at least in the minds of industry professionals who recently weighed in on the topic. In a March 2023 letter, more than 350 technology leaders shared with policymakers their concerns about the possible dangers AI poses in its present, unfettered iteration. Their warnings are notable, not just because of the obvious risks raised by a technology that already closely mimics human activities but also because of the role these AI pioneers have played in designing, developing, and propagating the technology around the world. Requesting a ‘pause’ in further development of the software, the signatories suggest that halting AI progress until implementable rules are created and adopted would be beneficial. The break would allow for the implementation of standards that would prevent the technology’s evolution into a damaging and uncontrollable force.

 

Industry Consensus: “AI Regulation Should be a Global Priority”

The 22-word letter is concise in its warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The stark statement embodies the fears arising among industry leaders as they watch unimpeded AI technology permeate more and larger sectors of the global community.

The letter was published by the Center for AI Safety (CAIS), a non-profit agency dedicated to reducing large-scale risks posed by AI. Dan Hendrycks, its executive director, told Canada’s CBC News that industrial competition based on AI tech has led to a form of ‘AI arms race,’ similar to the nuclear arms race that dominated world news through the 20th Century. He further asserts that AI could go wrong in many ways as it’s used today and that ‘autonomously developed’ AI – technology generated by its own function – raises equally significant concerns.

Hendrycks’ thoughts are shared by those who signed the letter, three of whom are considered the ‘Godfathers’ of the AI evolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun (their work to advance AI technology won the trio the 2019 Turing Award, the Nobel Prize of computing). Other notable signers include executives from Google (Lila Ibrahim, COO of Google DeepMind), Microsoft (CTO Kevin Scott), OpenAI (CEO Sam Altman), and Gates Ventures (CEO Bill Gates), who joined the group in raising the issue with global information security leaders. Analysts with eyes on the tech agree that its threat justifies worldwide collaboration. ‘Like climate change, AI will impact the lives of everyone on the planet,’ says technology analyst and journalist Carmi Levy. The collective message to world leaders: “Companies need to step up … [but] … Government needs to move faster.”

 

Where to Begin: Regulating AI

Even before the letter was released, the U.S. National Institute of Standards and Technology (NIST) was already working on promulgating AI user rules and strategies in the United States. Its January 2023 “AI Risk Management Framework 1.0” (AI RMF 1.0) set the initial parameters for America’s embrace of AI regulation, referring to AI as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments (adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).”

Define the Challenge

Framing the Risk

The NIST experts categorize AI threats alongside other, similar national security concerns, such as domestic terrorism or international espionage. Styles of AI technology, for example, can pose

short- or long-term concerns,

with higher or lower probabilities of occurring, and

with the capability of emerging locally, regionally, nationally, or in any combination of the three.

Managing those risks within those parameters requires accurate

assessment,

measurement, and

prioritization, as well as

an analysis of relevant risk tolerance by users.

Assessing Programming Trustworthiness

At the same time, the agency is assessing the ways in which the technology can already be trusted and used safely. Factors that play into this assessment include the software’s:

validity and reliability for its purpose,

inherent security programming,

accountability and resiliency capacities,

functional transparency, and

overall fairness for users, to name just a few.

Generating Framework Functions

At its heart, the AI RMF 1.0 enables the conversations, comprehension, and actions that facilitate the safe development of risk management practices for AI users. Its four primary functions follow a process that creates an overarching culture of AI risk management, populated by the steps needed to reduce AI risks while optimizing AI benefits.

Develop the Protocols

Govern: The ‘Govern’ function outlines the strategies that seek out, clarify, and manage risks arising from AI activities. Users familiar with the work of the enterprise can identify and reconfigure those circumstances where an AI program might deviate from existing safety or security requirements. Subcategories within the ‘Govern’ function apply to current legal and regulatory demands, policy sets, active practices, and the entity’s overarching risk management schematic, among many other factors.

Map: AI functions interact with many interdependent resources, each of which poses its own independent security concern. AI implementation requires an analysis of each independent resource and how it would impact or impede AI adoption and safe usage. This function anticipates that safe and appropriate AI decisions can be inadvertently rendered unsafe by later inputs and determinations. Anticipating these challenges early on reduces their opportunity to cause harm in the future.

Measure: This step naturally follows the Map function, directing workers to assess, benchmark, and monitor apparent risks while keeping alert to emerging risks. Comprehensive data collection pursuant to relevant performance metrics for functionality and safety/security gives entities control over how their AI implementation functions within their organization and industry as it is launched and throughout its productive lifecycle.

Manage: Activities involved in managing AI risks build on the strategies outlined in the Govern function, giving organizations ways to respond to, oversee, and recover from AI-involved incidents or events. The Manage function anticipates that oversight of AI risks and concerns will be incorporated into enterprise actions in the same way that other industry standards and mandates are followed. (A brief sketch of how these four functions might frame a risk register follows.)
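As a concrete, if simplified, picture of the four functions working together, here is a minimal sketch of a risk register organized around them. The entry, field names, and policy text are hypothetical; the AI RMF prescribes outcomes and activities, not code or a data schema.

```python
# Minimal sketch of a risk register organized around the AI RMF's four functions.
# The entry and field names are hypothetical; the framework defines outcomes, not a schema.
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    name: str
    mapped_context: str          # Map: where in the system or lifecycle the risk arises
    metric: str                  # Measure: how the risk is benchmarked and monitored
    response_plan: str           # Manage: how the organization responds and recovers
    status: str = "open"

@dataclass
class RiskRegister:
    governance_policy: str       # Govern: the policy and accountability that frame the rest
    risks: list[AIRisk] = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def open_risks(self) -> list[str]:
        return [r.name for r in self.risks if r.status == "open"]

register = RiskRegister(
    governance_policy="All AI releases reviewed against legal, policy, and security requirements"
)
register.add(AIRisk(
    name="training-data drift",
    mapped_context="third-party data feed used by the recommendation model",
    metric="weekly distribution shift vs. baseline",
    response_plan="retrain and re-validate before the next release",
))
print(register.open_risks())   # ['training-data drift']
```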

 

America is one of many entities working to establish viable controls over a potentially uncontrollable resource. Other nations, groups of nations, industries, and public and private companies are all engaged in creating regulations that allow for the fullest implementation of AI for the public good while reducing its existential threat to the world. The optimal outcome of all these efforts is a safe and secure technology that advances humankind’s best interests while helping it reduce the challenges it creates for itself.

AI Regulation: A Mandate for Management

Pam Sornson, JD

March 5, 2024

On October 30, 2023, President Joe Biden issued an Executive Order compelling the development of new safety and security standards for Artificial Intelligence (AI) technologies. By doing so, the President acknowledged the role AI is already playing – and will continue to play – in the nation’s economic, industrial, and social development. He urged interested and invested parties to protect the community and promote fair and reasonable ideals when adopting its use. The Order represents just one aspect of the current global push to gain control over the ethical use of AI.

 

Organizing Innovation …

Regulating AI technology presents a unique set of challenges that must be addressed if the digital asset is to be safe to use and add more value than hazard to the community. The myriad of individual AI opportunities generates a corresponding constellation of concerns. Each iteration and issue requires a dedicated regulatory response; together, they present a massive mission to the regulatory agencies responsible for AI’s oversight. The challenge is to coordinate the global effort to put guardrails around the tech to optimize its assets without unnecessarily impinging on its capacities.

 

… To Enhance the Good …

Just a short list of adopted uses for AI reveals the scope and extent of rules needed to maintain its integrity:

Connectivity – Assets included in AI programming are already connecting services and their providers to create an enhanced team capable of more together than each offers individually. In the healthcare sector, as an example, AI-powered programs are already conducting triage functions, issuing preliminary diagnoses, and sharing critical health information with far-flung medical team professionals at a pace unmatched by traditional collaborative methods.

Energy Management – The ever-expanding ‘distributed’ energy sector is also embracing AI opportunities to improve performance and build in reliability. Traditional community-wide power systems used centralized power stations to direct energy resources to their customers. Over the past few decades, however, adding in-home power options (solar, wind, and geothermal) resulted in a reversal of power flow. These resources now feed energy into the shared grid, and owners experience reduced energy costs or even receive revenue checks for their contribution to the network supply. The consequence is a power system that is vastly more complex than the original version, and the new iteration requires much more hands-on control. Emerging AI programs promise to streamline and coordinate that evolution.

Logistics – The COVID-19 era demonstrated the critical role that logistics play in the global economy, as supply chains failed, leaving millions of people and businesses without the goods and services they needed. AI in this sector is revolutionizing supply chain management and control. Automated warehouses populated by robots that don’t eat or sleep now provide a significant proportion of the physical labor involved, while sensors and cameras track the movement of goods through the system from their original creation to their ultimate destination. The efficiency level rises impressively in AI-enhanced facilities, which promises even more innovation to further the adoption of AI programming into the industry.

Each of these sectors involves thousands of companies and millions of people, all of which can influence the integrity of the respective system. Without regulation, the actions of any one person or entity can generate disastrous consequences for the others.

 

… And Manage the Bad.

Another short list of AI realities reveals the threats that an unregulated resource poses:

Job Losses – Automation – using robots to perform functions previously managed by humans – has already caused millions of job losses. In 2023, 37% of businesses responding to a New York-based survey stated they had replaced human workers with automated, AI-driven programming. Customer service workers were at the top of the list to be eliminated, followed by researchers and records analysts. Almost half (44%) of survey respondents indicated that more layoffs were likely in 2024.

Disinformation – The term ‘fake news’ became ubiquitous in the past decade as political entities sought to control voters by using false information to influence their actions. Using AI to produce and share inaccurate or manufactured data creates fundamental challenges for every entity that relies on its accuracy and veracity to maintain its credibility. When taken to extremes, the false commentary amounts to propaganda, which some assert is the world’s ‘biggest short-term threat.’

Security – Data security – keeping personal, corporate, and governmental information safely secured behind impregnable walls – has been top-of-mind in most industries for a long time, and a vast body of rules and regulations has evolved to protect it. AI presents a novel iteration – it generates ‘new’ information that may or may not be accurate or truthful. The computer program that created this ‘new’ information isn’t ‘aware’ of or concerned about its veracity (computers don’t ‘think’) and treats it like any other data bit within its reach. Consequently, the responses returned by the program are only as accurate and reliable as its affiliated databases. When those aren’t reliable, the AI response won’t be trustworthy either. Researchers note that AI software can be ‘gullible’ and produce manipulated responses when fed leading or misleading questions. It is also corruptible. Programmers can manipulate AI’s function by ‘poisoning’ its databases with false data. The action trains the technology to respond according to those directives and not pursuant to legitimate AI functioning. (A toy illustration of this poisoning effect follows this list.) Without focused interventions, AI has the capacity to perpetuate biases and inequities when those influences are already programmed into its information stores.
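The following toy sketch shows the poisoning idea in miniature. The ‘model’ is a one-line nearest-neighbor lookup and the records are invented numbers; nothing here reflects any real AI product, but it demonstrates how injecting a few mislabeled records changes the answer a data-driven system returns.

```python
# Toy illustration of database 'poisoning': injecting mislabeled records into the
# data a system consults changes the answers it returns. The records and the
# 1-nearest-neighbor 'model' here are invented purely for demonstration.

def nearest_label(query: float, dataset: list[tuple[float, str]]) -> str:
    """Return the label of the record whose score is closest to the query value."""
    return min(dataset, key=lambda record: abs(record[0] - query))[1]

# Clean records: messages with many suspicious markers (score near 1.0) are 'fraud'.
clean = [(0.1, "legitimate"), (0.2, "legitimate"), (0.8, "fraud"), (0.9, "fraud")]
print(nearest_label(0.85, clean))     # fraud

# Poisoned records: an attacker inserts high-scoring messages labeled 'legitimate'.
poisoned = clean + [(0.84, "legitimate"), (0.86, "legitimate")]
print(nearest_label(0.85, poisoned))  # legitimate -- the injected data now drives the answer
```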

 

AI presents a near-infinite number of iterations and permutations that can be used for good – or evil. Without any regulation of its use, its misuse poses a significant threat to virtually every corner of the global community. And, with so many entities now contributing to the AI universe, both legitimate and illicit, governing bodies are appropriately focused on putting guardrails on all AI efforts to maintain global stability and industrial reliability. The President’s Executive Order is one step toward gaining control over this emerging technology that offers so much promise for a healthier, more productive future for the world.

Generative AI: A Digital Double-Edged Sword

Pam Sornson, JD

February 20, 2024

The benefits of embracing artificial intelligence (AI) are apparent. Already, the software is providing previously unimaginable service to many industries, such as speeding access to healthcare, streamlining education processes, and resolving previously unsolvable problems. However, as bright as its promise appears to be, the programming also poses significant threats to the entities that rely on it for both work and life functions, including today’s typical consumers. Any person or company that enjoys the assets offered by AI should also be aware of its vulnerabilities so that they don’t become victims of the burgeoning AI crime sector.

 

 

A Wolf in Sheep’s Clothing?

Generative AI (GenAI) is a subset of the same programming that provides ‘traditional’ AI functions. Both are ‘trained’ software programs, meaning each is designed to pursue specific goals within the particular parameters of its developer’s strategy. Both styles scour billions of data bits to locate appropriate responses to user demands, and each provides data collection, analysis, and computing capacities that are far beyond those of humans.

However, the two AI aspects differ in one crucial way, and that difference is what makes GenAI so much more dangerous than the other:

Traditional AI offers responses based on the data contained in, and the structure of, its databases. It can respond only when the relevant data already exists within those research resources.

GenAI, on the other hand, can create responses that are original to its search-and-create process. It takes existing information and spins it into creative narratives of its own, using existing data as a learning template to develop new and original ‘thought.’ The challenge arises from the fact that it is often impossible to tell a human-generated piece of content from one generated by the computer. Consequently, since computers are not bound by human principles of ethics and morals, their products can appear completely valid and authentic even though they are rife with falsehoods and completely unreliable.

Simply put, GenAI can act as an independent ‘person’ by producing original content with the same sense of authority and integrity as that produced by its human counterpart but without adherence to the rules, regulations, and standards that bind that human effort.
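The distinction can be pictured with a toy contrast. The lookup table and the word-chain generator below are invented for illustration only; real generative models are vastly more sophisticated, but the difference in kind is the same: one style can only return what is already stored, while the other composes sentences that never existed verbatim in its training data.

```python
import random

# Toy contrast between the two styles described above. The FAQ table and the
# word-chain generator are invented for illustration; real systems are far more complex.

# 'Traditional' style: can only return an answer that already exists in its data store.
FAQ = {"store hours": "We are open 9am-5pm.", "returns": "Returns accepted within 30 days."}

def retrieve(question: str) -> str:
    return FAQ.get(question, "No stored answer available.")

# 'Generative' style: learns word-to-word transitions from training text, then composes
# new sentences that never appeared verbatim in that text.
training_text = "the store opens early and the store closes late and the staff opens doors"

def build_chain(text: str) -> dict[str, list[str]]:
    words = text.split()
    chain: dict[str, list[str]] = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain: dict[str, list[str]], start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        output.append(word)
    return " ".join(output)

print(retrieve("store hours"))                      # always the stored answer, verbatim
print(generate(build_chain(training_text), "the"))  # a novel sentence, plausible but unverified
```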

 

Creative Chaos Ensues

The application of GenAI for less-than-honorable purposes is already rampant:

‘Deepfakes’ are fabricated images built from real pictures that have been manipulated to depict something that never happened. This type of GenAI often combines data from a variety of sources to create a plausible, realistic ‘new’ version of those combined concepts. Nefarious operators often distort images of political figures (as examples) to reflect the operators’ own agenda, thereby misleading their consumers into believing that the revised version is a truthful one.

‘Phishing’ offers another opportunity to distort or bury the truth from trusting users. Phishing activities typically use email or text prompts to entice consumers into clicking on links that appear to come from a trusted source and that offer a valuable and welcome service. However, those links actually direct the user into a ‘dark’ space on the web where their confidential, personal information – birthdates, banking info, healthcare specifics, etc. – is quickly extracted and stolen.

ChatGPT is, perhaps, the best-known example of GenAI. This software produces human-like results that are often indistinguishable from those presented by actual humans. To users who are not aware of the concern, ChatGPT’s creative products are frequently accepted as valid and authentic, as if a person, not a machine, generated them.

There are many ways for nefarious GenAI to manipulate the community to achieve its developer’s intended goals:

Manipulating images also frequently violates copyright rules, as developers rarely trace or obtain permission from the original source of their material. Organizations that use GenAI programming to pursue their proprietary ambitions might unknowingly compromise another entity’s legally protected assets.

‘False’ images created with GenAI can also amplify existing biases and prejudices, making current social issues even more dangerous. A recent analysis of ~5,000 pictures created using text-to-image software (a form of GenAI) revealed that the images generated showed extreme gender and racial stereotypes. Unsuspecting viewers might believe that those destructive distortions are actually verified versions of reality.

As noted above, inappropriate disclosure of sensitive information is also problematic when the computer program doesn’t discern it or won’t conform to the boundaries designed to protect that data.

 

Harnessing GenAI for Good

GenAI still offers users tools to achieve their goals and ambitions despite the threat of misuse and exploitation marring its opportunity. By carefully learning and following directives for appropriate and valid GenAI uses, every entity can maximize the value provided by the software while reducing their exposure to inappropriate outcomes:

Use only publicly available GenAI tools. There are commercially available software packages that facilitate exceptional AI functions without exposing users to the challenges of the format. Many of these digital programs are designed to address the demands of specific industries, as well, so adopting the technology might also be a mandatory step to retain the company’s current market share.

Invest in customization services. Every organization operates differently from its competitors, and those distinctions are also often its unique ‘claim to fame’ within its sector. AI programming can be modified to both protect and highlight the existing presence while also adding new values and opportunities for consumers.

Add data feedback and analysis tools to ensure the software is – and remains – on target for its purpose. Data in all its forms is the true currency of society these days; corporate databases contain myriads of unexplored information that can be harnessed to produce even more success for the company. AI programs will use that newly discovered data to refine their functions, providing even more value to the enterprise.

 

Both traditional and generative AI offer tremendous benefits as well as potential threats to every organization. Accordingly, the process of adopting the technology and adapting it for proprietary use should be carefully mapped out and managed. Those companies that master the project can also gain significant advantages in a sector where such advances may still be unexplored, which was, after all, the purpose behind the development of artificial intelligence in the first place.

 

 

Instructional AI: Advancing Education’s Capacities

Pam Sornson, JD

February 20, 2024

Like virtually every other type of modern technology, the usage of ‘artificial intelligence’ (AI) has triggered great fear, effusive elation, and almost all emotions in between. However, also like other modern technology, AI’s bona fide impact on society is not fully understood as so many of its capabilities – real and potential – have yet to be explored. Despite the challenge of not knowing whether its threats are actual or imagined, AI is making inroads into many cultural and societal venues, including that of higher education. How is it in use today? How might it be used tomorrow? And is its presence in the classroom a boon or a bust for modern learners?

 

AI as a Ubiquitous Tool

Most people already interact with AI, even if they don’t know that’s the programming they’re accessing. The digital resource drives virtually all of today’s smart devices, using intuitive and insightful strategies to facilitate a myriad of services all through a single portal.

Mapping programs steer users through the physical world, offering directions, time-to-destination, traffic notices, and even restaurant and entertainment suggestions.

The name (a noun) of today’s most popular search engine is now also a verb, and many people use it to describe how they found their latest new gadget or problem solution.

Even customer service ‘providers’ are more AI today than they are human. The now ubiquitous ‘chatbot’ – literally a ‘bot’ that ‘chats’ with people – responds to most online inquiries and is often the only ‘human’ resource consumers have contact with when looking for answers to their concerns.

It’s most likely that even those who fear AI and its potential threats use the resource as a regular part of their typical day.

And that exposure to and use of AI – and its closely related cousin, machine learning – isn’t diminishing either. The technology has the capacity to improve itself, and many cutting-edge computer programs in use today are programmed to enhance their own internal functioning. The potential values offered by these ‘smart machines’ make them increasingly desirable as business assets, so investments in AI are growing at a notable rate.

‘Healthcare tech’ uses AI to capture data, generate reports, connect medical teams, and inform patients. ‘Telehealth,’ the delivery of medical advice over an internet connection, is becoming the most popular method for connecting with healthcare professionals. During the COVID-19 pandemic, the number of telehealth appointments rose from 5 million in 2020 to over 53 million in 2022.

‘Smart assistants,’ such as Siri and Alexa, now monitor home and office systems to modulate room temperature, lighting, door access, and more. In 2022, more than 120 million American adults were using these assistants at least once a month.

The use of digital payment portals is also on the rise. Most of today’s banks use AI technology to interact with all their customers, both individuals and businesses. The tech facilitates 24-hour access, online deposits, withdrawals, and other services and maintains monitoring capacities over billions of dollars of financial and other assets.

Analysis of how AI technology is growing as a fundamental corporate asset shows that it is also quickly becoming foundational to numerous existing and emerging industries. Many of these industries – and the countless companies that make them up – are embracing the AI opportunity in the post-COVID era to revise their economic foundation and that of their community.

 

Instructional AI in the Spotlight

Clearly, AI has been successfully embedded in both personal and corporate daily activities for some time, so it’s not surprising that many industries, including the education sector, have also adopted the technology into their daily activities. In schools, AI programs and applications are typically identified as ‘Instructional’ AI, and those, too, have been around for several years. Their uses span the gamut of educational functions:

In some cases, the tech is used to track student activities. A well-programmed AI service can track attendance, test scores, course availability, and other data-rich nuances of the learner’s experience to inform school admins about their progress and facilitate fine-tuning of their overall educational adventure.

AI is also proving to be extremely valuable in streamlining educational resources to meet the needs of each individual student. Data collected reveals where those learners are experiencing challenges and where those challenges are originating. Sometimes, it shows that the person needs additional support; other times, it demonstrates that the school has missed the mark for serving this particular person or class of people.

Perhaps the most common implementation of AI in educational processes, however, is its use in instructional design. The designers of programs, courses, and the classes that make them up are constantly improving their resources to better meet student needs, with the goal of enhancing the learner’s absorption of the content to achieve ultimate success in their chosen subject matter. AI tools give these ‘education architects’ the capacity to deliver deeply personalized course content that responds to the individual learner’s past performance, learning pace, and preferences. Further, in addition to structuring the course in a format more compatible with the students’ preferences, AI also facilitates adaptations to the system in real time. Data collected as the course unfolds informs the programming, which can then modify modules or lesson plans accordingly. (A minimal sketch of this adaptive selection appears after this list.)

AI also offers an asset that’s already well-favored in the community: its gaming capacity. Game-based learning opportunities that are enhanced by AI can provide more individualized and flexible learning opportunities for virtually all students, which can also increase their likelihood of persisting to graduation and then finding the job and career that best suits their needs.
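As a concrete, if greatly simplified, picture of the adaptive instructional-design idea above, here is a minimal sketch of choosing a learner’s next module from recent scores and pace. The module names, score thresholds, and pacing rule are invented assumptions for illustration; real adaptive-learning engines weigh far more signals than these.

```python
# Minimal sketch of the adaptive idea described in the instructional-design point above.
# Module names, score thresholds, and pacing rules are invented for illustration.

MODULES = ["intro", "core concepts", "applied practice", "advanced topics"]

def next_module(recent_scores: list[float], minutes_per_lesson: float, current_index: int) -> str:
    """Pick the learner's next module from recent performance and pace."""
    average = sum(recent_scores) / len(recent_scores)
    if average < 0.6:
        # Struggling: repeat the current module with remediation.
        return f"review: {MODULES[current_index]}"
    if average > 0.85 and minutes_per_lesson < 20:
        # Strong and fast: skip ahead when possible.
        return MODULES[min(current_index + 2, len(MODULES) - 1)]
    # Otherwise advance one step.
    return MODULES[min(current_index + 1, len(MODULES) - 1)]

print(next_module([0.9, 0.95, 0.88], minutes_per_lesson=15, current_index=1))  # 'advanced topics'
print(next_module([0.4, 0.55, 0.5], minutes_per_lesson=30, current_index=1))   # 'review: core concepts'
```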

 

The use of AI as a learning tool is growing as more higher education institutions adopt it to service their myriad of programs and workflows. For students, the digital asset is proving to be an invaluable addition to their education, so long as they use it with integrity. For the higher education sector in general, AI also offers the opportunity to develop whole new avenues of courses and careers to meet the burgeoning demand for skilled AI technicians. At least at this first glance, AI is performing exceptionally well for the education community.

Re-Valuing Labor: Ousting Obsolescence

Throughout 2023, The Pulse explored how society values ‘labor’ – how it recognizes and compensates the effort provided by workers for their employers. The research revealed that, in many cases, that value is set not by the actual work done but instead by the relative value of the person who performed it:

In too many cases, women and people of color are deemed to offer less value than Caucasian men, even when they do the exact same work.

A significant percentage of global labor efforts are completely uncompensated, which suggests those ‘workers’ offer no value. Contrary to that thought, however, is the reality that their effort contributes significant value to the community by providing social and altruistic benefits while alleviating the burden on public funds.

Sometimes, the workers exposed to the highest risks while on the job were also the lowest-paid. The COVID-19 pandemic clarified the exceptional value of the “essential worker,” whose ‘menial’ labor was determined to be vital to protect the health and safety of their community.

The advent of digital automation as a labor provider has also upended many work-world norms. In some cases, human workers were eliminated altogether in favor of a more accurate and speedier machine. (In other cases, however, the technology augmented the human effort, creating more value for the company.)

These realities reflect an entrenched perspective of the ‘value of labor’ as that has been defined by centuries of human development and evolution. Effort was assigned ‘value’ based on the time it took to complete, its relative simplicity, the personal characteristics of who was providing it, or the volume of product rendered within a specified period. That long-standing but haphazard ‘labor valuation’ practice has caused a wide and growing gap between the compensation paid for services and the actual value they provide to the community.

 

Outputs vs. Outcomes: A Critical Difference

However, the calamities of the past few years appear to have triggered a shift away from the norm of ‘work value’ being appraised based on who is doing it, not on the benefit it confers. The opportunities that have emerged through the COVID years – digitization, remote work, Artificial Intelligence – and the pressures generated by those phenomena are driving a re-imagination of how ‘labor’ is valued. Instead of using piecemeal time measurements or a rote count of widgets produced, industry leaders are now contemplating how to engage with all the assets offered by their employees to improve the company’s fortunes as a whole. Organizations of all types are now reviewing their current workforce deployment strategies to determine if they would fare better if they altered their traditional worker and occupational expectations.

Shifting Focus …

What’s becoming more apparent is that emerging workforce options can offer significant benefits to those entities willing to embrace them. Enterprises are taking a broader look at their workforce to determine if unexplored potential within that resource might serve the organization better. In many cases, jobs can be automated with technology, which frees the human worker to provide a more intentional service. In other cases, a re-analysis of the full spectrum of advantages offered by a particular occupation or worker reveals significantly more value when viewed through a ‘corporate benefit’ lens instead of a ‘payroll expense’ lens:

How would the company function if workflows were designed to expand on or augment employee talents and skills beyond their rote mechanics?

How can workers contribute more to achieving corporate goals rather than just adding numbers to corporate outputs?

How can the company leverage the outputs of its human resource assets to improve overall productivity and profitability?

One possible response to each of these questions is to shift the corporate mindset away from bean-counting and toward value creation.

 

… to a New Corporate Reality …

As is usually the case, shifting corporate culture to embrace new opportunities requires a dedicated strategy that addresses both the old elements that need changing and the new elements that introduce revised expectations.

From an overarching perspective, the entire organization can focus beyond earlier metrics (cost savings or efficiency levels, as examples) and toward ‘value creation’ in general. What are the company’s most significant barriers to growth, and how can revising the effort of its workforce address those challenges better?

The company can also determine which elements are best suited for automated labor and identify where human effort may be wasted. In these instances, adding technology to do rote work frees the employee to contribute other value to the company. New technology might also augment that worker’s effort, increasing its value while reducing the cost to attain that enhanced benefit.

Enterprise leaders might also review each job description and activity to discover the highly valuable but often hidden ‘soft skills’ embedded within it. Most employees can be taught the manual skills needed to perform specific tasks, but their intellectual capacity to analyze and make changes on the fly is what makes them more valuable as contributing employees.

Organizations are embracing this ‘changed mind’ and creating jobs and occupations that facilitate both more flexibility and respect for the worker and also improved productivity for the business.

 

… That Embraces ALL Possibilities.

But they can’t stop there. Recent data reveals that the inequities that were so deeply entrenched in the workplace habits of the past have not been fully alleviated post-pandemic:

Women still need to catch up to men in their economic recovery after the pandemic. One reason they lag behind is that they frequently held those jobs that were most impacted by COVID-induced closures, including any job that required face-to-face exposure. Women also perform most of the community’s caregiving services (frequently unpaid work), and that burden increased during the coronavirus era.

Those without post-high school education credits are also still suffering economic depression caused by the pandemic. The workforce population with bachelor’s or higher degrees has grown by 6.9% over numbers posted in April 2020, while the group without that educational achievement is 3.2% smaller than it was at that time.

Color and ethnicity also remain factors in the employment sector. Unemployment for Blacks and Latinos stands at 5.1% now, while the white unemployment level is only 3.5%.

Both companies and their workforces are aware of the evolutions now occurring in today’s labor markets. New occupations are developing to replace those now deemed obsolete, while emerging technology and workforce philosophies are driving innovation in hiring and retention strategies for high-quality workers. The situation offers hope that, despite the losses and destruction caused by COVID-19, the pandemic opened doors – and eyes – to new ways to engage with and reward the human resources that truly manage the global economy.

 

Re-Valuing Labor: Reforms for Workers

Corporations aren’t the only entities questioning how ‘work’ should be valued in the post-COVID environment. Workers are also reassessing their relative worth within their employers’ organizations and realizing that they want and need more than just a paycheck from their jobs. Over 48 million Americans left their jobs in 2021, and over 50 million did the same in 2022, a development now known as the Great Resignation. Gartner labels the phenomenon driving the exiting workers the ‘Great Reflection,’ as they seek more meaning in how they spend their time; old-style occupational valuation makes them wonder if the ‘nine-to-five’ obligation is still worth their effort. Companies looking to retain their experienced workforce and slow the talent and human resource drain are now also seeking ways to respond to the increasing demand that ‘working’ should also afford ‘meaning.’

 

The Pandemic’s Short-term Impact on Workforce

The COVID-19 pandemic caused unprecedented changes in how the world works. That global health crisis triggered an equally global re-evaluation of what is truly important in life, as millions of people succumbed to the virus and millions more were compelled to move on in the face of those losses. In response to those revelations, thousands elected to change their work circumstances to better reflect what is truly meaningful to them. Many people are no longer willing to spend precious time working in occupations that provide a paycheck but little more.

Recent research reveals several reasons why there was a mass exodus of workers from every kind of occupation throughout the course of and after the pandemic:

Many felt unappreciated by their employers, who paid them less than they knew they were worth. In a Pew Research survey, 63% of respondents said low pay was a top contributor to their decision to leave.

Many respondents (also 63%) quit because their occupations offered no opportunity to advance their careers. They cited a stagnant job placement with little or no control over when or how they performed their work. That lack of control over work hours was particularly aggravating, especially in cases when the work itself was not time-sensitive. They determined that maintaining that unfulfilling and limiting employment status quo was no longer an option.

Coupled with the low pay was the lack of respect many workers felt while on the job. More than half (57%) of the survey group stated that their employers did not notice or recognize their full slate of talents and skills.

Others left because they felt overworked or underworked by their organization, had challenges obtaining child or family care, or needed benefits that weren’t offered (paid time off or healthcare insurance, for example). Still others decided it was as good a time as any to relocate to a new community better suited to their needs and tastes.

In all cases, workers determined they wanted a better life than their current job could give them and elected to move on despite the risks entailed in that process.

 

Impact on Industry

The mass resignation phenomenon obviously impacted employers and industries, too. Typically, companies don’t consider the potential problems that might arise if their workforce suddenly shrinks or if key workers elect to go elsewhere. Those that were unprepared often found that they could not respond appropriately to their customers’ needs without adequate staff, which caused many businesses to fail.

Also, some industries were more affected by the phenomenon than others. Throughout the pandemic, the education, healthcare, retail, hospitality, and transportation industries experienced significant workforce shrinkage. In some cases, jobs just disappeared as public health overseers implemented ‘social distancing’ and other coronavirus safety measures. In other instances, occupations were permanently ‘retired’ as machines took over the labor. And in many circumstances, job openings were left unfilled because no one wanted to do the work. Certainly, today’s current workforce and employment trends are decidedly different than they were just three years ago.

Impact on Workers

Fortunately, the mass resignation episode did not also result in higher unemployment. Instead, many job seekers were able to leverage their skills and talents into positions that better matched their capacities and their preferences. Savvy potential employers, also very aware of the ongoing employment migration, had modified their open positions to facilitate ‘sustainable careers’ for these new hires. They added benefits and other employee-facing resources that met candidates’ newly elevated expectations. Additionally, many organizations adjusted the expectations of unfilled job openings to emphasize the ‘soft skills’ that enhance daily activities. This skill set includes analysis, reasoning, and decision-making abilities that extend beyond typical day-to-day duties. Workers who take these hard- and soft-skilled jobs often make better money, have more flexibility in their work conditions, and can actually influence the fortunes of the enterprise. Not surprisingly, these ‘enhanced’ occupations are very attractive to workers who have been asking for what they offer.

 

The Pandemic’s Long-term Impact on Workforce

New data suggest that the Great Resignation is receding or over, as the ‘quit rate’ has returned to the average experienced in 2019. But that doesn’t mean the work world has returned to its former self. Instead, many organizations have embraced the expectations of their workers as the ‘new normal’ and are taking steps to provide a truly ‘employee-friendly’ work environment. They are adding benefits and perks to every employee’s hiring package that were previously reserved only for the upper echelon of leadership. And, as workers return to work (both in the office and remotely), they gain substantially better occupational situations than those they left behind. Many companies now offer, as a matter of course:

mental health support (often in conjunction with better physical health benefits, too),

financial support in the form of enhanced 401(k) plans, coaching, and even tuition reimbursements for upskilling workers, and

additional resources for child and family care.

Companies are also reviewing how their work gets done. Hybrid positions, where the employee works remotely some of the time, are becoming more common in occupations where location isn’t important. The candidate pools are changing, too, as leaders look for talent in the diverse communities that had been overlooked in the past. And everyone is adding technology to perform rote and routine tasks and to augment the intellectual inputs made by human workers.

 

From all perspectives, it appears that the COVID-19 pandemic and its fallout forever changed how the world works. Looking forward, the businesses that will achieve the greatest success will also be the ones that give their employees the attention and consideration they seek.

 

IIJA Infrastructure Investments and Projects

The Infrastructure Investment and Jobs Act (IIJA) went into effect in 2022 and has been stimulating projects and proposals across industries ever since. Its intent is twofold:

To repair the aging infrastructure that is failing after decades of excessive wear and erosion, and

To build a national workforce capable of maintaining the new foundation while also building capacity for continuing evolution and growth.

The bipartisan law intends to address several pressing national needs through these investments:

Ensure a safe and functional physical infrastructure and a robust economic foundation for future growth;

Provide all communities with new resources for building their specific economies;

Reduce the inequities that remain so solidly in place in many long-entrenched social systems; and

Provide training and employment for millions in both traditional and newly emerging occupations and careers.

The scope of the law is immense. The opportunities it promises for further expansion and advances across all industries are unlimited.

 

One Broad Approach. Many Narrow Targets.

Repairing What’s Broken

In addition to repair costs, the law allocates over $550 billion over eight years toward new developments in digital connectivity, energy systems, and transportation networks to accommodate today’s burgeoning demand for these services.

Projects to expand internet assets through new ‘middle-mile’ infrastructure (the digital bridge that connects data to the users who seek it) will receive some of the $65 billion allocated for that purpose through the Broadband Equity, Access, and Deployment (BEAD) Program.

A cleaner, more efficient energy industry is also a target of the IIJA. The law will invest $65 billion in energy innovation, carbon management, farming and forestry developments, and cleaner transportation systems. Just one goal of this aspect of the bill is to reduce the country’s reliance on fossil fuels and its second-highest-in-the-world CO2 emissions. The United States currently produces 14% of global CO2 emissions each year, behind only China (39%).

It’s also spending over $100 billion on climate change initiatives and electric grid restructuring in response to the regional flooding, storms, and power outage catastrophes that have occurred in recent years.

These ‘repair and rebuild’ investments respond to decades of spending cuts that have left much of the nation’s foundation in tatters.

 

Building What’s Needed

Not insignificantly, the IIJA is also looking to develop ‘best practices’ within systems that impact everyone. The law sets aside funding to establish ten Centers of Excellence affiliated with a planned ‘National Center of Excellence for Resilience and Adaptation.’ This network of government agencies, industry experts, and higher education think tanks will focus its efforts on improving the nation’s transportation and logistics resiliency in the face of worsening weather and climate events. Each Center will receive funding for research and development of resilient transportation, energy, and climate change projects that respond to circumstances within its region. The goal is to prevent the damage and loss experienced in the past by communities living in underdeveloped areas that lack these resources.

Notably, the law’s originators envision the administration of each Center of Excellence being handled, potentially, by a local higher education institution. As coordinators, the schools would help administer the $500 billion in new money for ‘surface transportation’ projects, and the schools themselves could also serve as ‘centers of excellence’ within their regions, which would make them eligible for IIJA project funding as well.

That administrative role is also crucial to the success of the overall IIJA project. Within the transportation sector alone, 11 key programs have been established to launch technology deployments and perform advanced research. The work is designed to result in a “future-proofed transportation system that is data-driven and evidence-based”:

Strengthening Mobility and Revolutionizing Transportation (SMART) Grants ($1B, new) – to advance smart urban technologies and systems to improve transportation efficiency and safety.

University Transportation Centers (UTCs) ($500M, expanded) – to advance transportation expertise at two- and four-year colleges and universities.

Advanced Transportation Technologies and Innovative Mobility Deployment (ATTIMD) ($300M) – for developing advanced transportation technologies and innovative mobility deployments.

Advanced Research Projects Agency-Infrastructure (ARPA-I) (new) – A new agency focused on leveraging science and technology to address efficiency, safety, and climate goals for transportation infrastructure.

Open Research Initiative (authorized at $250M, new) – to manage unsolicited research pitches that address unmet DOT research needs.

Nontraditional and Emerging Transportation Technology Council (institutionalized) – the now legally entrenched NETT Council will identify and resolve gaps associated with nontraditional or emerging transportation technologies.

Transportation Research and Development Five-year Strategic Plan (renewed) – The USDOT Research, Development, and Technology Strategic Plan guides Federal transportation research and development activities.

Smart Community Resource Center (new) – incorporating resources from the USDOT, Operating Administrations, and other Federal agencies to develop intelligent transportation systems.

Joint Office of Energy and Transportation (new) – A DOT and DOE (energy) partnership to study, plan, coordinate, and implement issues of joint concern.

Transportation Resilience and Adaptation Centers of Excellence (TRACE) Grant Program (Authorized at $550M, new) – See above.

Within the DOT, more than $4.5 billion in research activities is now, and will continue to be, focused on a range of critical priorities, including vulnerable road users, the impact of self-driving vehicles on roads, reducing driver distraction, and emerging alternative-fuel vehicles and infrastructure.

 

With new funding available for countless projects that touch thousands of neighborhoods and millions of people, the IIJA promises to add significant value and better living opportunities for communities across the country. To achieve the law’s fullest fruition, however, the effort will also need a workforce development pipeline that delivers precisely trained labor to build and maintain both existing and new resources. For that purpose, the IIJA is looking to the country’s network of community colleges as its training and credentialing foundation. And for that to become a reality, those schools – all 1,038 of them – will have to redefine themselves as workforce development hubs. That project, too, is in progress. Read on.

The IIJA, WFD, and the CCCs

How and where to allocate available funding is a critical element of any infrastructure project, especially when there is more than $1 trillion to be distributed. The Infrastructure Investment and Jobs Act of 2021 (IIJA) was specific in its focus when determining how best to use its $1 trillion in assets. Four central U.S. government departments – the Departments of Energy, Commerce, Transportation, and Labor – are the primary beneficiaries of the funds, which are to be used to improve and innovate foundational community resources as well as to develop the workforce that will keep those services working and in good condition. The ‘workforce development’ aspect encompasses more than just providing funding to pay workers. It also covers the costs of the incremental steps needed to get those workers ready to go: policy development, implementation strategies, and educational reorganization are all necessary to ensure the resulting labor force is well trained for the work it is expected to perform. Consequently, the IIJA is facilitating an overhaul of the higher education sector to build the country’s future workforce training programs.

 

 

Demand Drives Distribution

The IIJA doesn’t specify how training should occur or who should provide it. However, one logical choice to deliver those services would be training institutions already embedded in the community: the nation’s community colleges. These local academic and technical schools are already on the front line of the labor development initiative. With IIJA funding, they can adapt existing programs and build new ones to train students for both current and future occupations. And a vast array of new jobs will emerge as the economy absorbs modern labor innovations such as artificial intelligence and automation. Across three federal agencies, almost $500 million is allocated for training purposes (the quick tally after this list shows how the figure adds up):

The Department of Energy (DOE) has $160 million available both for career skills training and for expanding its network of energy engineers through the development of more Industrial Assessment Centers.

The Department of Labor (DOL) has $50 million in grant funding through its Strengthening Community Colleges Training program.

The Department of Transportation (DOT), tasked with perhaps the most significant aspect of the bill, will share up to $280 million over five years for training the workforce needed for its Low or No Emission Bus Program. Additionally, the Bipartisan Infrastructure Law (BIL) – another name for the IIJA, enacted in November 2021 – continues existing DOT funding for ‘surface’ transportation projects and advances funding appropriations for supporting services through 2026. These funds will add resources – and workforce – to ensure the nation’s airspace, highways, railroads, and commercial waterways are safe for use (especially in light of climate change), modernized to optimize digital resources, and equitable.
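
The figures above come from the allocations just described; the minimal sketch below simply totals them to show how the ‘almost $500 million’ estimate is reached. The dictionary labels are shortened descriptions for illustration, not official program identifiers.

```python
# Quick tally of the training allocations cited above (figures in millions of USD).
training_allocations = {
    "DOE career skills / Industrial Assessment Centers": 160,
    "DOL Strengthening Community Colleges Training": 50,
    "DOT Low or No Emission Bus workforce training": 280,
}

total = sum(training_allocations.values())
print(f"Total training allocation: ${total}M")  # -> $490M, i.e., 'almost $500 million'
```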

Each state is empowered to spend its share of the federal funding as it sees fit, according to its unique industrial, social, and educational challenges. However, the national legislation does place some conditions on how at least some of the money must be spent. Eligible workforce development projects funded by the BIL, for example, must meet four primary criteria:

Increase women and minority participation in the workforce;

Fill workforce gaps – provide workers for jobs that are currently unfilled;

Add digital and related training elements to support emerging transport realities and

Attract new investment opportunities to incorporate new revenue sources into the economy.

With these parameters in place, governors and other workforce development leaders are free to fund (among many options) student tuition support, apprenticeship programs, collaborative organizations, and outreach activities that unite the entire community – schools, businesses, industries, and government – behind the workforce development initiative.

 

 

States Select Strategies

At the state level, where the funds will be spent, stakeholders must determine not just where to invest but also the processes necessary to ensure those investments pay off. Analyses of local, regional, and state-level economies will suggest where these processes might begin:

Addressing the most pressing community needs – food scarcity and lack of housing are often caused by insufficient or non-existent work opportunities;

Identifying relevant stakeholders and their potential for collaboration;

Enhancing existing workforce development activities and practices, or

Filling existing workforce needs and gaps.

These are just four of the many challenges that can be remediated with BIL funds.

 

 

Enlisting Educators

At the heart of the workforce development project is the training facility. Schools designed to provide occupational and vocational education – including community colleges, technical schools, trade schools, etc. – have many options open to them that can be funded with IIJA dollars:

Staff upskilling and training – the pandemic demonstrated that today’s education tools are much more varied than books and paper.

Professional development – see above.

Incorporating internships into existing workflows.

Partnering with community members to facilitate apprenticeships.

Outreach to bring more invested parties into the conversation and the project.

The state governments tasked with allocating these funds can streamline their decision-making processes by creating and pursuing a set of goals that builds on existing resources:

They can leverage current educational capacities by understanding what each program offers, tweaking it to fit needs better, or expanding it to serve more people.

They can use existing data in conjunction with emerging information and digital technologies to clarify precisely where demand lies and where solutions might be found.

They might set up a governing body to oversee, from a 30,000-foot level, the activities, outputs, outcomes, and results of IIJA investments. Interim reports can reveal where things are on target – or not.

They can engage with their industrial community, especially the businesses and organizations that are already invested locally. Businesses thrive with well-trained employees, so business owners should have some influence on what it takes to create that human resource.

 

The IIJA is funding America’s renewed infrastructure framework by focusing its resources on the civic, social, and industrial opportunities that benefit the country as a whole. The ultimate goal of the endeavor is to enhance and bolster the nation’s workforce to ensure the economy has the labor it needs to maintain stability and add growth. Facilitating that process are the country’s middle-skill schools – the trades, unions, and community colleges – that are evolving into the workforce development hubs that America needs to achieve these ambitions.

 

 

The Infrastructure Bill’s Potential Impact on the EWD Sector

Deep in the heart of the COVID-19 pandemic, America’s national leaders did an extraordinary thing. On November 6, 2021, they passed the Infrastructure Investment and Jobs Act (IIJA), “a once-in-a-generation investment in our nation’s infrastructure and competitiveness.” The bipartisan act commits over one trillion dollars to repairing America’s decaying infrastructure while building new foundational resources to meet 21st-century demands. The new law will address two critical national concerns at once:

repairing the country’s substructure systems (railways, bridges, highways, etc.)

while also generating the jobs and business opportunities that will ensure a stable economy.

Embedded in its provisions are mandates requiring equitable practices across all industries so that the nation’s residents have the employment, income, and work opportunities they need to accomplish their personal goals. The IIJA is an attempt to address many of the complex issues America faces today while also adding resources to expand the country’s vast potential.

 

Repairing Aging Systems

Studies conducted over the past several years clearly reveal that the nation’s infrastructure systems are decaying. Much of that foundation was built decades ago, and it is suffering extreme distress because of the population explosion that occurred in the intervening years.

According to government reports, 20% of America’s highways and 45,000 bridges are failing. The roads require repaving at the least, and many require additional substructure repair to retain their usability into the future. Bridge inspections show cracks, separations, lost connectors, and other signs of deterioration. In almost all cases, obsolete materials and extreme overuse are causing the problems.

The airports are also showing serious signs of wear and tear, as are the country’s ports of entry. These assets play a crucial role in the nation’s economy, underpinning its extensive network of supply chains.

Public transport options are also suffering from years of neglect and deferred maintenance. America needs at least 24,000 new buses, 5,000 metro cars, and thousands of miles of train tracks to serve today’s population.

Public water systems are also in disarray; many still use life- and health-threatening lead pipes, and studies show that as many as 10 million homes are without safe drinking water.

The country relies on each of these systems to maintain its economic activity and provide healthy, happy communities for its residents. Further, in many cases, providing these resources will alleviate many of the inequities that arise when transportation options are few and far between, as is the case in many of America’s ethnic communities. The IIJA investments promise a safer, more adaptable infrastructure foundation that upgrades existing capacities while adding resources to address emerging demands, opportunities, and realities.

 

Building in New Resources

The IIJA also looks ahead, aiming to spend much of its investment dollars on building – at a national level – the resources every local community needs to thrive and compete economically.

Digital resources are driven by the internet, and without that asset readily available, communities suffer. This was just one significant fact revealed during the pandemic: thousands of remote communities lost contact with their neighbors and support services because the only available communication channels required an internet connection they did not have.

The realities of climate change are also compelling a rethinking of how systems should work. The IIJA invests heavily in electric vehicles and the network of public charging stations that will keep them running. This specific project serves two purposes: it will reduce the country’s negative impact on the climate while also increasing manufacturing employment opportunities. The rebuilding process will include accommodations for current and future threats caused by floods, storms, drought, etc., and the enhanced, ‘greener’ public transportation system will eliminate millions of tons of greenhouse gas emissions.

And clean, reliable energy itself is a central element of the two-year-old law. The Department of Energy reports that power outages cost the country as much as $70 billion each year. In areas where extreme heat occurs more frequently, the number of lives lost to climate change is also growing. The nation’s electric grid is in serious need of an overhaul and modernization effort; the IIJA intends to ensure it gets that attention.

With funds directed at repairing failing systems while building in new resources, the IIJA promises to bring America’s foundation back to its former glory, a model for the world on how a booming and free economy thrives.

 

Creating Economic Stability

For many years, the American economy has tilted toward the wealthy in terms of growth opportunities and reductions in tax contributions. The IIJA will remedy some of these inequities by improving publicly available resources in previously underserved places, typically communities of lower economic status. The infrastructure investments themselves will have an impact – those efforts routinely make positive contributions to local, regional, and national GDP and employment numbers. But where the strategy might be most helpful is in its intention to hire the millions of workers needed to bring its goals to fruition.

And to hire that labor force, those workers must first be trained and ready to work. According to the Brookings Institution, a nationwide, all-encompassing training effort will require input from every element of the economic sector: businesses, industries, governments, and educators must all work together to develop the occupationally focused education systems needed to sustain the IIJA’s upward trajectory. Those experts also suggest ways to accomplish these lofty goals. They recommend that the country:

Increase investments in training programs, including upskilling and reskilling options. At best, today’s existing workforce training resources are only moderately adequate because most do not yet incorporate the digital aspect of the emerging workforce environment.

Utilize more work-based learning options, including apprenticeships, internships, and union participation. Very few school-based programs can match the efficacy and efficiency that come from on-the-job training.

Embrace diversity as an economic asset. Too many people are left out of the workforce for no other reason than their ethnicity, heritage, or even sexual orientation. The Brookings leaders assert that those human resources can provide immense value to their communities both as workers and as leaders. Adding their capacities will result in improved economic outcomes across all segments of the population.

 

Passage of the IIJA was a noteworthy accomplishment because the law aims both to rebuild America’s aging infrastructure and to construct an economy that supports a successful future for everyone. This ‘once-in-a-generation’ investment will surely benefit not just this generation but all of those yet to come.

 

Artificial Intelligence and the Emerging Workforce

The ultimate impact that artificial intelligence (AI) will have on the emerging workforce has yet to be determined. The technology itself is still too new, and the few released iterations of its current capacities only suggest what added services and benefits it may eventually provide. Experts note, too, that the technology poses significant threats if its capabilities are used for nefarious purposes; present-day policies and safeguards do not yet have the capacity to police these concerns.

What is known is that there is an almost unlimited number of use cases for AI and that its adoption will upend virtually every aspect of industry, policy, and society. Already, emerging AI applications are changing how the world works and the direction its industries are moving. Maximizing the value offered by AI will require sophisticated knowledge and training, and preparing the workforce both to master existing technology and to embrace future innovations will be a momentous task. Designing, developing, and implementing that effort is the next step in the AI adoption process.

 

Generative AI – What It Is and Why It’s Becoming So Popular

Stated very simply, ‘generative AI’ amalgamates several different algorithms to create and process digital data into meaningful, useful information – ‘content.’ In this case, ‘content’ means the text, imagery, sound, and other outputs afforded by today’s computers. Multiple coding techniques combine to present the varied data contained in each separate source as a cohesive, intelligible whole. A generative AI model is trained on billions of data points drawn from millions of sources; when a user poses a question, it draws on the patterns it has learned to organize and present a response that best fits the inquiry. The result is a more complete response pulled from a more comprehensive range of communication tools. For example, asking the ChatGPT ‘bot’ (short for robot) to suggest options for a toddler birthday party results in a list of ten things to do, from choosing a theme to engaging parent participation. AI can also generate an essay about geology, analyze medical records for more accurate diagnoses, and adapt manufacturing prototypes into higher-functioning working models. These are just three among – literally – millions of other AI-driven services.
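
To make that request-and-response pattern concrete, here is a minimal sketch of how a developer might pose the same birthday-party question to a generative model programmatically. It assumes the OpenAI Python SDK (version 1 or later) and an API key stored in the OPENAI_API_KEY environment variable; the model name is illustrative, and any hosted generative service with a comparable chat interface follows the same pattern.

```python
# A minimal sketch of querying a hosted generative AI model.
# Assumptions: `pip install openai`, an OPENAI_API_KEY environment variable,
# and an illustrative model name -- not a recommendation of any vendor.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; swap in whatever model is available
    messages=[
        {"role": "user",
         "content": "Suggest ten activities for a toddler's birthday party."}
    ],
)

# The model returns generated text ('content') assembled from the patterns
# it learned during training, not from a live database lookup.
print(response.choices[0].message.content)
```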

Because AI technology moves so fast and is so comprehensive in its delivery, it can – and does – outperform human effort in many tasks. Consequently, many companies have adopted the technology to perform routine tasks within their organizations, a choice that often displaces the worker who had previously done that work. It is because of its swifter, more accurate performance that AI poses such a threat to the livelihoods of millions of workers worldwide. If a company can achieve its goals faster, with more accuracy, and at less cost, then shifting from human labor to AI is a sound business decision. But what does that mean for the now-unemployed laborer?

 

AI and Today’s Workforce

AI has already displaced millions of workers, in part because the recent pandemic prohibited close contact between people, forcing millions of businesses to close. The global health crisis drove more than eight million American workers to ‘change’ their occupations, either by finding new work or by leaving the workforce altogether. In many cases, the jobs they left were subsequently ‘automated’ – taken over by an AI bot programmed to perform those specific functions. These occupations often involve ‘routine’ duties that don’t require creative input, such as data entry, product assembly, or common responses to consumer inquiries. Their automation reduced both virus exposure and payroll costs for many organizations.

Further, it appears that the coronavirus only accelerated the transition from human to technical ‘worker.’ The World Economic Forum predicted in 2020 that more than 85 million jobs would be displaced by AI and automation by 2025 and that those occupations would span many industries:

Customer service representatives, vehicle drivers, and factory workers are among the ‘blue collar’ jobs that will slowly disappear.

Computer programmers, paralegals, and travel advisors are also on the block, even though that type of work itself requires more intellectual input. AI can be programmed to provide those capacities.

‘White collar’ occupations are not immune, either. The duties performed by financial traders and research analysts are also within the scope of operations facilitated by AI technology.

Clearly, AI-driven automated services are already taking over elements of the labor force and leaving the workers who had held those occupations without jobs. One facet of the AI evolution will be to find new occupations and opportunities to replace those lost employment options.

 

AI and Tomorrow’s Workforce

Fortunately, governments and industries are already discussing the potential pathways that might develop as AI continues to evolve. In the U.S. Senate, Majority Leader Chuck Schumer (D-NY) introduced his “SAFE Innovation Framework” in the summer of 2023, a plan intended to safeguard worker security within the greater context of expanding AI technology. That framework is expected to inform the White House effort to develop a National AI Strategy outlining how the country will adapt to the occupational and economic impacts of the transition to AI. Built into it are mechanisms intended to address the inequities of today’s fractured society. Again, the opportunities posed by this technology are almost unlimited.

However, industry experts aren’t as concerned about the immediate impacts of AI. They assert that today’s AI models aren’t as clever as media coverage suggests and that they are often hyped beyond their actual capacities. Many of those commentators also note that the technology can generate as many jobs as it eliminates, if not more, which would provide the job opportunities needed to maintain a stable economy. Like any other technology, AI requires program updates, oversight, maintenance, and management – all roles typically held by human workers – and those roles will multiply as the technology expands into new arenas. They also add that AI will augment existing work, enhancing the impact of human input to maximize outputs and outcomes.

 

While no one knows where the global embrace of AI is headed, the ultimate goal should be for both industry and humanity to thrive during and after the transition into a fully AI-invested community.