AI in Industry: An Evolution in Progress

A recent Deloitte survey of over 2,800 corporate directors and C-suite leaders reveals that Artificial Intelligence (AI) is making significant inroads into the workings of thousands of national and global enterprises. Representatives from six industries and 16 countries offered their insights and experiences about how their organizations are using the technology. Their observations illustrate AI’s broad reach and deep impact on their companies, communities, and industries.

Adoption Across the Industrial Board …

One reason why AI, and more specifically ‘generative’ AI, has been embraced as swiftly as it has is its capacity to solve enterprise-focused challenges that had previously been difficult or impossible to manage. Access to almost unlimited data gathered across multiple databases facilitates a more comprehensive analysis of an issue than has ever before been available. Business leaders are seeing that potential and turning to the tech to gain insights and guidance they previously could not find. There are opportunities for AI-empowered actions in almost every industry, as a recent report on the burgeoning number of AI use cases demonstrates:

Retail

Retail organizations are using AI to provide several services that are critical to their success.

‘Bots’ now respond to a high percentage of customer service calls, directing callers to the department best suited for their inquiry.

The tech is also enhancing the shopper’s experience by providing more detailed and appropriate suggestions based on their input and history.

Logistics and Travel

Businesses that provide transport or cater to travelers are also embracing AI as a service-enhancing tool:

AI engines can find more and better transport options, whether for an individual heading across the state or a shipment of goods moving across oceans.

AI programs and embedded sensors combine to provide minute-to-minute oversight of supply chain products, from the moment they enter the production cycle through to their delivery to their ultimate user. Tracing the passage of goods through service centers and across international borders has never been easier.

The software also alerts systems developers to potential bottlenecks that could threaten delivery times, early enough to prevent those delays from occurring.

Financial Services

‘Money’ isn’t always cash these days, and AI is assisting billions of people to access the financial resources they seek, regardless of the currency they’re using or the purchases they’re making.

Banks are streamlining their offerings to accommodate their increasingly tech-savvy financial customers, often providing personalized planning tools designed for the unique person or entity in question.

Insurance companies are using AI to evaluate claim data, establish legitimate claims, and uncover potential or actual fraud cases. They’re also streamlining their policies to better reflect actual risk levels based on AI-enabled risk assessment capabilities.

Energy

Both traditional and emerging energy resources are fine-tuning their activities based on insights gleaned from AI programming.

Traditional energy providers use the technology to improve efficiencies within their plants and systems, often by automating services and using sensors to track metrics, performance, maintenance, and other relevant elements.

They’re also using AI to facilitate and track the growing inputs of renewable resources into the nation’s power grid.

Healthcare

The healthcare services spectrum is perhaps the most invested in AI.

AI is proving invaluable as a tool to streamline administrative systems to make them more efficient and effective.

It is also connecting medical teams with emerging data that is relevant to their shared patient. With each specialist and team having virtually instant access to developing healthcare needs, the patient can receive the best possible care for their particular condition regardless of whose office they happen to be in.

The pharmaceutical industry is also using AI to improve its services to the healthcare industry. Automated software eases data collection and analysis of the enrollment in and the running of clinical trials, ensuring their proper execution according to industry standards. The resulting information informs drug developers of needs, threats, and other relevant factors impacting the future of a potential medicine or therapy.

Even in its current, relatively raw, and unregulated state, the use of AI is gaining significant ground in almost all industries to perform an ever-growing list of services and occupations.

… For an Almost Unlimited Number of Purposes

While its popularity for automation implementation and control continues to grow, many companies report using their AI resources for one or more of three specific functions. The Deloitte report notes that most early AI adopters focused their investments on improving corporate efficiencies, increasing productivity, and reducing costs. Of those survey respondents:

56% reported that their AI investments were making their organization more efficient, while

35% reported their costs had shrunk because of the technology.

Almost one in three (29%) reported they experienced enhanced product values and services as a result of their AI implementation.

Other surveys show that companies are using the technology to perform a myriad of services beyond achieving better efficiency or reducing costs:

AI programming can oversee the inner workings of almost any digital system, so its capacity to optimize website reliability and uptime, for example, is unmatched. The AI overlay can detect potential site or data disruptions before they cause problems, and its monitoring capacity ensures that all elements of the organization remain in sync with corporate goals and initiatives.
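The detection idea behind this kind of uptime monitoring can be sketched in a few lines of Python. The response-time series and three-sigma threshold below are invented for illustration; production monitoring stacks use far richer statistical models than this:

```python
import statistics

def is_anomalous(history, sample, threshold=3.0):
    """Flag a sample that sits more than `threshold` standard
    deviations above the mean of the recent history."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return sample > mean + threshold * stdev

# Hypothetical recent response times, in milliseconds.
recent = [120, 118, 125, 122, 119, 121, 124, 120]

print(is_anomalous(recent, 123))  # False: within normal variation
print(is_anomalous(recent, 400))  # True: a disruption may be brewing
```

Flagging the spike before users notice it is the point: the monitor raises an alert while the slowdown is still a statistical blip rather than an outage.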

AI’s predictive maintenance capability is saving companies money, too. GE and Rolls-Royce, for example, are using it to analyze aircraft engine performance to both catch wear-and-tear issues before they become failures, as well as to track exhaust metrics and other relevant environmental concerns typically found in the aeronautics field.

Workforce optimization is another industry aspect being transformed by the technology. To both enhance productivity and reduce costs, many companies are using AI to manage workforce scheduling by factoring in employee availability, worker skill sets per project, and customer traffic. Organizations large and small, including Target®, Costco®, and Starbucks®, use AI to optimize their workforce metrics while keeping their customers happy.

 

While still a relatively new resource, both the current capacity and future promise of AI as a business tool shouldn’t be underestimated. It offers unmatched opportunities for growth and, when used properly and safely, may solve some of the world’s most intractable problems.

 

AI Adoption in America: What, Where, and Why

Pam Sornson, JD

April 2, 2024

A recent report from the Burning Glass Institute (BGI) analyzed which U.S. regions were most prepared to embrace Artificial Intelligence (AI) as an economic development tool, which many consider an indicator of future economic growth. Almost every region of the country would benefit from this more sophisticated technological base, but most have not yet invested in the foundational infrastructure needed to support that digital evolution.

The BGI analysts compared the prevalence of legacy tech skills versus AI-based skills – what BGI calls ‘Frontier Skills’ – in communities across the country to determine which geographical areas would see the most AI-driven expansion, both in their workforces and their economies. Their findings were sometimes surprising:

Not all digital skill sets are the same;

Industries evolve differently depending on their location and local resources, and

Not all industries lend themselves to an early or comprehensive adoption of the still unpolished computing opportunity.

While AI resources are advancing across all regions, only a few are truly prepared to maximize the opportunities it presents right now.

 

What Needs Doing: Legacy Expertise versus AI Frontier Skills

Fundamentally, AI and legacy skills match at the most basic level. Every digital tool – AI or otherwise – needs to be:

programmed for launch and then reprogrammed over time as needs evolve;

continuously managed to ensure full functionality, and

secured to ensure no inappropriate intrusions or actions can threaten its performance.

Companies using technology in any capacity typically have an IT department to manage these functions and maintain their productivity and safety. Further, as digital technology permeates more elements of the industrial complex, there will always be demand for these types of skills.

AI programming, however, requires a different set of skills over and above those fundamental actions. Beyond simply running its programs reliably, AI software adds services not found in non-AI tech:

Machine Learning incorporates neural networks, a foundational database structure modeled on the workings of the human brain, to facilitate ‘deep learning’ programming that can ‘read’ disparate data types like images, audio, and text to discern insights and make predictions. The neural network exchanges data across its nodes to ‘learn’ from other information caches, find and fix mistakes, and improve its functionality without additional human intervention. Over one-third of patent submissions in the past ten years contain a ‘machine learning’ (ML) capacity, indicating its popularity as a digital business tool.
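The ‘learn from mistakes without human intervention’ loop described above can be sketched with a single artificial neuron, the building block of a neural network. This is a deliberately minimal illustration, not a real deep-learning system; the training data (the logical AND function) and all parameter values are invented for the example:

```python
import math

# Toy training set: inputs and labels for the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# One neuron: two weights and a bias, all starting at zero.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5  # learning rate

for epoch in range(3000):
    for (x1, x2), y in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        err = pred - y            # the "mistake" the neuron just made
        # Gradient-descent update: nudge each weight to shrink the error.
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1] -- the neuron has learned AND
```

A real neural network stacks many such neurons into layers and repeats the same correct-your-own-errors loop at a vastly larger scale; that repetition, not any single clever rule, is what "learning" means here.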

Computer vision is another AI-related technology that requires upskilled tech training. Computer vision facilitates the software’s capacity to ‘see’ the data it’s aimed at and derive and act on the information gleaned from those sources. This AI technology collects images of environmental elements using cameras, sensors, and algorithms. It then identifies factors that indicate locations, threats, and other relevant elements to inform the AI system’s ‘decisions.’ This technology is a critical element of a ‘self-driving’ vehicle.
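A classic first step in how such software ‘sees’ is convolving an image with a small kernel to surface edges. The sketch below uses a tiny invented 5×5 grayscale "image" and a Sobel-style horizontal-gradient kernel; real vision pipelines learn their kernels rather than hand-coding them:

```python
# A 5x5 grayscale "image": dark (0) on the left, bright (9) on the
# right, so there is a vertical edge down the middle.
image = [
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
]

# Sobel-style kernel: responds strongly where brightness
# changes from left to right.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

def convolve(img, k):
    """Slide the 3x3 kernel over every interior pixel and sum
    the element-wise products."""
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            total = sum(k[i][j] * img[r + i][c + j]
                        for i in range(3) for j in range(3))
            row.append(total)
        out.append(row)
    return out

edges = convolve(image, kernel)
print(edges)  # [[36, 36, 0], [36, 36, 0], [36, 36, 0]]
```

The large values mark exactly where the brightness jumps: the software has "found" the edge. Object detection in a self-driving vehicle builds on many layers of this same filtering operation.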

Natural Language Processing (NLP) is also an AI component. It can be ‘rule-based’ (driven by programming specific to the entity) or ML-based (driven by both rules and the results of countless inquiries and searches). NLP seeks to understand the meaning of text and voice inputs so it can respond to both written and oral data. Its use in ‘chatbots,’ programs designed to respond to written inquiries, and ‘digital assistants,’ like Alexa and Siri, has revolutionized how many people use their digital devices.
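The ‘rule-based’ variety can be sketched in a few lines: match patterns in the user's text, return a canned reply. All the patterns and responses below are hypothetical examples, and real systems (and any ML-based NLP) are far more sophisticated:

```python
import re

# Hypothetical rules: each maps a regex pattern to a canned reply.
RULES = [
    (r"\b(hi|hello)\b",      "Hello! How can I help you today?"),
    (r"\b(refund|return)\b", "I can help with returns. What is your order number?"),
    (r"\b(hours|open)\b",    "We're open 9am-5pm, Monday through Friday."),
]
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first matching rule's response, else a fallback."""
    for pattern, response in RULES:
        if re.search(pattern, message.lower()):
            return response
    return FALLBACK

print(reply("Hello there"))      # Hello! How can I help you today?
print(reply("I need a refund"))  # I can help with returns. What is your order number?
```

The limits of this approach are also its lesson: anything outside the rule set falls through to the fallback, which is precisely the gap that ML-based NLP, trained on countless real inquiries, is meant to close.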

The demand for AI-specific programmers is large and growing, with one in three companies saying they can’t move forward with AI adoption because they lack the technically skilled workforce needed to do so. However, those current programmers with computer science degrees and a mastery of logic, reasoning, and problem-solving can attain AI skills by pursuing degrees within that specific field, assuming they can find a school that offers the training. They’d be well advised to take that path: companies that have already implemented their AI strategy are reporting their intentions to increase their investments – and consequently, their Frontier Skilled workforce – in 2024 and beyond.

 

Where AI Skills Are Most Concentrated

The BGI report analyzes Frontier Skills capacities in metro areas based on the size of the community, with large ‘metropolitan statistical areas’ (MSAs) comprising cities with 25,000+ tech workers, medium-sized MSAs with tech workforces numbering between 5,000 and 25,000, and small MSAs that are home to 5,000 or fewer tech workers.

Not surprisingly, those regions and urban metropolises that have already invested in tech- and data-enabled economies are leading the country in their AI adoption processes, although in many cases, the capacity for local industries to embrace AI also impacts its adoption rate.

Three large MSAs – Seattle (first), San Jose (second), and San Francisco (third) – lead the country in Frontier Skills concentrations due to their underlying foundations of ‘technology’ as an industry in and of itself. Los Angeles-Long Beach-Anaheim ranks 8th on this list.

Given their histories as ‘tech-heavy’ economies, San Diego, Austin, Boston, and New York are also high on the list of large MSA early adopters.

Notably, while the Washington D.C. MSA is home to one of the largest tech-based workforces in the country, its industries are mainly defense and government contracting, which are typically based on legacy technology. Of the 27 large-sized MSAs identified by the BGI, Washington D.C. ranked 21st, behind less obvious contenders Detroit (18th), Kansas City (19th), and Philadelphia (17th).

Utah’s burgeoning ‘Silicon Slopes’—Provo-Orem (1st), Ogden-Clearfield (10th), and Salt Lake City (3rd)—have propelled the region to the top of the mid-sized MSA category.

Surprisingly, Fayetteville-Springdale-Rogers, Arkansas, is number 2 on the mid-sized MSA list, due mainly to the tech-forward presence of Walmart. Walmart has been investing in advanced technologies for years, so its early adoption of AI is not unexpected.

MSAs actively growing in population are not necessarily enlarging their Frontier Skilled workforce. Miami, Houston, and Dallas lag in the bottom half of the large MSA list, at 14th, 26th, and 22nd, respectively, primarily because they’ve not yet developed a dedicated, tech-focused workforce.

Overall, across the four geographical regions—West, Midwest, Northeast, and South—the West’s workforce dominates the country with its Frontier Skills concentrations, while the South lags behind the rest of the nation.

 

The BGI document reveals how America is managing the deluge of AI-enabled business opportunities now flooding its databases. Organizations intent on building their AI-fueled “Frontier Skilled” workforce can look to the successes being had in the various regions to ensure their AI adoption strategy is one that promises similar rewards.

 

AI in America: What’s Happening Here

Pam Sornson, JD

March 19, 2024

The United States, at both the federal and state levels, is pursuing its goal of legislating governance mandates over the use of artificial intelligence within its jurisdictions. Appropriately concerned about the threats posed by the technology, as well as enthused by its benefits and opportunities, political leaders are seeking to gain some form of control over the as-yet unregulated digital capacity before it becomes too deeply embedded in society in its present ‘wild west’ state.

 

Personal Problem. National Challenge.

The demand for AI regulation grows daily as more individuals experience fraud and loss at the hands of criminals wielding AI tools. Fraud attempts are growing exponentially as the technology infiltrates unregulated, and therefore unprotected, corporate databanks. Messages sent through every channel now mimic authentic ones from trusted merchants and service providers, confusing and misleading their recipients.

Federal agencies are very aware of the challenge: “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” said FTC Chair Lina M. Khan. The FTC (Federal Trade Commission) has already finalized a rule prohibiting the impersonation of businesses and government offices; it’s now working on a similar regulation banning the impersonation of people.

The FTC’s action is just one avenue America is pursuing in its quest to gain control over rampant AI interferences within its territories. In fact, the nation launched its official AI management strategy in 2019 when the National Institute of Standards and Technology (NIST) released its Plan for Federal AI Standards Engagement. Back then, the main goal was to provide guidance for setting priorities and levels of government oversight into future AI developments to “speed the pace of reliable, robust, and trustworthy AI technology development.” Fast forward four years, and the new goal set includes stopping the overwhelming influx of unwanted AI programming while harnessing its emerging technological capacities to improve national fortunes.

 

Interim Steps

In the intervening years, the United States has made progress in its effort to manage AI resources:

In May 2021, NIST developed its AI Standards Coordination Working Group (AISCWG) to “facilitate the coordination of federal government agency activities related to the development and use of AI standards, and to develop recommendations relating to AI standards [for] the Interagency Committee on Standards Policy (ICSP) as appropriate.”

Also in 2021, the National Defense Authorization Act of 2021 (NDAA) specifically authorized NIST to develop AI parameters for the Department of Defense, Department of Energy national security programs, Department of State, and the Intelligence Community. The act became law in January 2021, after Congress overrode a presidential veto.

The National Science Foundation now offers grants in support of AI research and development aimed at ensuring access to reliable and trusted technology. From this source have arisen the National Artificial Intelligence Research Institutes, which enlist public and private entities to collaborate on potential responses to AI evolutions, both positive and negative.

The U.S. Department of State is busy working with international organizations and governments to integrate wide-ranging AI regulatory efforts into a cohesive whole. The agency strongly supported the principles developed by the Organisation for Economic Co-operation and Development (OECD). It is also a member of the Global Partnership on Artificial Intelligence, which is housed in the OECD and works to connect the theories underlying AI programming and the practices that emerge in its development.

These agencies continue to make progress.

 

Widespread Federal Efforts

Today, numerous federal agencies are engaged in AI research and development to improve processes, reduce risk and loss, and enhance their capacity to serve their constituents. The Government Accountability Office (GAO) monitors those activities and acts as an overseer of anything that affects the country as a whole, including AI initiatives. The GAO has developed an AI Accountability Framework that guides individual agencies in their AI efforts regarding data management, information monitoring, systems governance, and entity performance. From the federal point of view, AI programming emerging from these departments must be responsible, reliable, equitable, traceable, and governable. The framework guides each agency as it initiates its AI systems to ensure they’re compatible with those mandated standards.

The GAO also tracks those efforts and reports on their progress, and its recent December 2023 report reveals strides being made – and steps to be taken:

Twenty of 23 agencies reported current or expected AI ‘use cases’ where AI would be used to resolve an issue or problem. Of those, NASA and the Departments of Commerce and Energy identified the largest number of situations where AI will help national efforts (390, 285, and 117, respectively). More than 1,000 possible use cases have surfaced across the government.

There were only 200 or so instances of AI in practice as of 2022, while more than 500 were in the planning phase.

Ten of the 23 agencies had fully implemented all the AI standards mandated for their agency, while 12 had made progress but had not completed those tasks. The Department of Defense was exempted from this review because it was issued other AI mandates to follow.

The GAO report also provides recommendations for 19 of the 23 agencies, which include next steps into 2024 and beyond. For the most part, these recommendations focus on ensuring that organizations downstream from their national overseer (including federal, state, and regional agencies) have the guidance and standards they need to provide appropriate AI implementation within their area. Some of those recommendations include adding AI-dedicated personnel, enhancing AI-capable technologies, and ensuring a labor force that is well-versed in AI operations.

Individual states are also developing AI management resources, although those are focused on in-state needs and opportunities. The White House has issued a Blueprint for an AI Bill of Rights, and newly proposed federal regulations will shape both the focus and the trajectory of AI activities in the years to come.

 

Artificial Intelligence has arrived, and its influence continues to grow. Gaining control over that growth will allow it to enhance the lives of all of humanity. The United States government is determined to establish that control, because failing to master AI for appropriate purposes poses potentially existential threats to the entire planet.

 

 

 

AI Regulation: The Current State of International Affairs

Pam Sornson, JD

March 19, 2024

It is a decided understatement to say that there are legitimate concerns about entities using Artificial Intelligence (AI) for nefarious purposes. The opportunities for its misuse are significant, considering its capacity to generate documents, images, and sounds that present as 100% authentic and true.

Accordingly, leaders across most community sectors are discussing and developing rule sets designed to govern the use of the technology. Once those rules are established and embraced, the development and adoption of correlating enforcement strategies and standards will keep the world safe from the misuse of this unprecedented computing capability. At least, that’s the plan …

 

AI Governance Drivers

Governments, industrial leaders, and technology experts all agree that the threats posed by AI are immense.

In July 2023, United Nations Secretary-General Antonio Guterres warned the world that unchecked AI resources could “cause horrific levels of death and destruction” and that, if used for criminal or illicit state purposes, it could generate “deep psychological damage” to the global order.

Also in Summer 2023, the International Association of Privacy Professionals (IAPP) analyzed AI adoption practices in numerous settings to evaluate how regulators might address issues that have already arisen. Recent court cases show that the protections previously enjoyed by software developers might be eroding as more civil liberty, intellectual property, and privacy cases with AI-based fact patterns hit the courts. As ‘matters of first impression,’ the results of these early lawsuits will be the foundation of what will undoubtedly become an immense new body of laws.

All by itself, Generative AI is causing much consternation in computer labs and C-Suites around the world. AI vendors have utilized vast quantities of web-based copyrighted data as part of the software’s ‘training materials,’ and the owners of those copyrights aren’t happy that their work has been co-opted without their permission.

As has been the case with all technology, the promise of AI and all its permutations creates as many concerns as it does possibilities. The world is now grappling to get those concerns under control.

 

AI Governance Inputs

Entities invested in putting controls around AI threats are also working on enforcement systems to ensure compliance by AI proponents. Three early entries into the fray help to outline the scope, depth, and breadth of the challenge all AI developers are now facing as their products become more ubiquitous in the world:

The Asilomar Principles

While AI programming itself has been around for a while, articulated considerations about its safe usage are relatively new. In 2017, the Future of Life Institute gathered a group of concerned individuals to explore the full range of its opportunities and issues. The resulting ‘Asilomar Principles’ set out 23 guidelines that parse AI development activities into three categories: Research, Ethics and Values, and Longer-Term Issues.

The OECD

The work of the Organisation for Economic Co-operation and Development (OECD) is also notable. The OECD works to establish uniform policy standards related to the economic growth of its 38 market-based, democratic members. These 38 countries research, inaugurate, and share development activities across more than 20 policy areas involving economic, social, and natural resources. As a group, the OECD considers AI to be a general-purpose technology that belongs to no one country or entity. Accordingly, its members agree that its use should be based on international, multi-stakeholder, and multidisciplinary cooperation to ensure that its implementation benefits all people as well as the planet.

The OECD parses its work into two main categories, each of which has five subparts.

Its Values-Based Principles focus on the sustainable development of AI stewardship to provide inclusive growth and well-being opportunities that pursue humanitarian and fair goals. AI developers should make their work transparent and understandable to most users while ensuring the validity of its safety and security capacities, and they should be – and be held – accountable for the programs they create.

For policymakers, the OECD recommends establishing international standards that define the safe and transparent development of AI technologies that function compatibly within the existing digital ecosystem. Policy environments should be flexible to encourage the growth and innovation of the software while protecting human rights and also limiting its capacity to be used for less than honorable reasons.

The European Union

On Wednesday, March 13, 2024, the European Parliament voted to adopt the ‘Artificial Intelligence Act,’ the world’s first comprehensive set of AI regulations. It took three years of negotiations, data wrangling, and intense discussions to achieve … “historic legislation [that] sets a balanced approach [to] guarantee [our] citizen’s rights while encouraging innovation” around the development of AI. The remaining procedural steps, including formal approval by the EU Council, were expected to follow without significant opposition.

Fundamentally, the AI Act is focused on limiting the risks AI presents, especially in certain situations. Some AI programming provides rote actions that don’t require intense analysis or oversight, so they don’t need excessively intense regulation, either. Other AI platforms, however, incorporate more sensitive data in their computations, such as those that use private biometric or financial data. Inappropriate use of such sensitive information could be disastrous for the person or people exposed, and the scope of AI technology has the capacity to expand that risk to a much greater scale. The AI Act requires developers to prove their model’s trustworthiness and transparency, as well as demonstrate that it respects personal privacy laws and doesn’t discriminate. Entities found to be non-compliant with the AI Act risk a fine of up to 7% of their global annual turnover.

 

These are just three of the many governments and organizations working to gain control over the use of AI within their jurisdictions. Leaders in the United States are also focused on the concern. The second article in this edition of the Pulse looks at what’s happening in America as it, too, reels from the immense and growing impact of AI on virtually all of its systems and communities.

 

AI Regulation: Fears and a Framework

Pam Sornson, JD

March 5, 2024

Even while artificial intelligence (AI) offers immense promise, it also threatens equally immense disaster, at least in the minds of industry professionals who recently weighed in on the topic. In a 2023 letter, more than 350 technology leaders shared with policymakers their concerns about the possible dangers AI poses in its present, unfettered iteration. Their warnings are notable not just because of the obvious risks raised by a technology that already closely mimics human activities, but also because of the role these AI pioneers have played in designing, developing, and propagating the technology around the world. Requesting a ‘pause’ in further development of the software, the signatories suggest that halting AI progress until implementable rules are created and adopted would be beneficial. The break would allow for standards that would prevent the technology’s evolution into a damaging and uncontrollable force.

 

Industry Consensus: “AI Regulation Should be a Global Priority”

The 22-word letter is concise in its warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The stark statement embodies the fears arising among industry leaders as they watch unimpeded AI technology permeate more and larger sectors of the global community.

The letter was published by the Center for AI Safety (CAIS), a non-profit agency dedicated to reducing large-scale risks posed by AI. Dan Hendrycks, its executive director, told Canada’s CBC News that industrial competition based on AI tech has led to a form of ‘AI arms race,’ similar to the nuclear arms race that dominated world news through the 20th Century. He further asserts that AI could go wrong in many ways as it’s used today and that ‘autonomously developed’ AI – technology generated by its own function – raises equally significant concerns.

Hendrycks’ thoughts are shared by those who signed the letter, three of whom are considered the ‘Godfathers’ of the AI evolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun (their work to advance AI technology won the trio the 2018 Turing Award, the Nobel Prize of computing). Other notable signers include executives from Google (Lila Ibrahim, COO of Google DeepMind), Microsoft (CTO Kevin Scott), OpenAI (CEO Sam Altman), and Gates Ventures (CEO Bill Gates), who joined the group in raising the issue with global information security leaders. Analysts with eyes on the tech agree that its threat justifies worldwide collaboration. ‘Like climate change, AI will impact the lives of everyone on the planet,’ says technology analyst and journalist Carmi Levy. The collective message to world leaders: “Companies need to step up … [but] … Government needs to move faster.”

 

Where to Begin: Regulating AI

Even before the letter was released, the U.S. National Institute of Standards and Technology (NIST) was already working on promulgating AI user rules and strategies in the United States. Its January 2023 “AI Risk Management Framework 1.0” (AI RMF 1.0) set the initial parameters for America’s embrace of AI regulation, referring to AI as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments (adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).”

Define the Challenge

Framing the Risk

The NIST experts group AI threats alongside other, similar national security concerns, such as domestic terrorism or international espionage. Different styles of AI technology, for example, can pose

short- or long-term concerns,

with higher or lower probabilities of occurring, and

with the capability of emerging locally, regionally, nationally, or in any combination of the three.

Managing those risks within those parameters requires accurate

assessment,

measurement, and

prioritization, as well as

an analysis of relevant risk tolerance by users.
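The assess-measure-prioritize sequence above can be sketched as a simple scoring exercise. The risk register, the 1-5 likelihood and impact scales, and the tolerance value below are all invented for illustration; the framework itself prescribes no particular scoring formula:

```python
# Hypothetical risk register: (risk name, likelihood 1-5, impact 1-5).
risks = [
    ("model produces biased hiring recommendations", 3, 5),
    ("chatbot leaks internal data in a response",     2, 5),
    ("minor formatting errors in generated reports",  5, 1),
    ("training data becomes stale over a quarter",    4, 3),
]

tolerance = 6  # risks scoring at or below this are accepted, not mitigated

# Measure: score each risk as likelihood x impact.
# Prioritize: rank the scores and keep those that exceed tolerance.
scored = sorted(((l * i, name) for name, l, i in risks), reverse=True)
to_mitigate = [name for score, name in scored if score > tolerance]

for score, name in scored:
    print(f"{score:>2}  {name}")
```

The tolerance line does the real work: a frequent-but-trivial risk (score 5) is accepted, while a rare-but-severe one (score 10) rises to the top of the mitigation queue, which is exactly the trade-off the framework asks users to make explicit.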

Assessing Programming Trustworthiness

At the same time, the agency is identifying the ways in which the technology is already safe to use. Factors that play into this assessment include the software’s:

validity and reliability for its purpose,

inherent security programming,

accountability and resiliency capacities,

functional transparency, and

overall fairness for users, to name just a few.

Generating Framework Functions

At its heart, the AI RMF 1.0 enables the conversations, comprehension, and actions that facilitate the safe development of risk management practices for AI users. Its four primary functions follow a process that creates an overarching culture of AI risk management, populated by the steps needed to reduce AI risks while optimizing AI benefits.

Develop the Protocols

Govern: The ‘Govern’ function outlines the strategies that seek out, clarify, and manage risks arising from AI activities. Users familiar with the work of the enterprise can identify and reconfigure those circumstances where an AI program might deviate from existing safety or security requirements. Subcategories within the ‘Govern’ function apply to current legal and regulatory demands, policy sets, active practices, and the entity’s overarching risk management schematic, among many other factors.

Map: AI functions interact with many interdependent resources, each of which poses its own independent security concern. AI implementation requires an analysis of each independent resource and how it would impact or impede AI adoption and safe usage. This function anticipates that safe and appropriate AI decisions can be inadvertently rendered unsafe by later inputs and determinations. Anticipating these challenges early on reduces their opportunity to cause harm in the future.

Measure: This step naturally follows the Map function, directing workers to assess, benchmark, and monitor apparent risks while keeping alert to emerging risks. Comprehensive data collection pursuant to relevant performance metrics for functionality and safety/security gives entities control over how their AI implementation functions within their organization and industry as it is launched and throughout its productive lifecycle.

Manage: Activities involved in managing AI risks follow the strategies outlined in the ‘Govern’ function, giving organizations ways to respond to, oversee, and recover from AI-involved incidents or events. The ‘Manage’ function anticipates that oversight of AI risks and concerns will be incorporated into enterprise actions in the same way that other industry standards and mandates are followed.
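The four functions above describe organizational activities, not code, but a loose sketch in Python can make the workflow concrete. Everything here – the Risk record, the likelihood-times-impact scoring heuristic, and the sample risk names – is a hypothetical illustration, not part of the NIST framework itself:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float  # estimated probability of occurring (0.0-1.0)
    impact: float      # estimated severity if it occurs (0.0-1.0)

    @property
    def score(self) -> float:
        # A common prioritization heuristic: likelihood x impact.
        return self.likelihood * self.impact

def govern(policies: list[str]) -> list[str]:
    """Govern: record the policy set that frames every later step."""
    return sorted(policies)

def map_risks(sources: dict[str, tuple[float, float]]) -> list[Risk]:
    """Map: turn identified risk sources into tracked Risk records."""
    return [Risk(name, lik, imp) for name, (lik, imp) in sources.items()]

def measure(risks: list[Risk]) -> list[Risk]:
    """Measure: benchmark and rank risks by priority score."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

def manage(risks: list[Risk], tolerance: float) -> list[str]:
    """Manage: flag every risk whose score exceeds the user's tolerance."""
    return [r.name for r in risks if r.score > tolerance]

policies = govern(["model-audit policy", "data-privacy policy"])
ranked = measure(map_risks({
    "biased training data": (0.6, 0.8),  # hypothetical estimates
    "prompt injection": (0.4, 0.5),
    "model drift": (0.7, 0.3),
}))
print([r.name for r in ranked])
print(manage(ranked, tolerance=0.25))
```

The point of the sketch is the ordering of the steps: governance policies frame the exercise, mapping produces the inventory, measurement ranks it, and management acts only on the risks that exceed the organization’s stated tolerance.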

 

America is one of many entities working to establish viable controls over a potentially uncontrollable resource. Other nations, groups of nations, industries, and public and private companies are all engaged in creating regulations that allow for the fullest implementation of AI for the public good while reducing its existential threat to the world. The optimal outcome of all these efforts is a safe and secure technology that advances humankind’s best interests while helping it reduce the challenges it creates for itself.

AI Regulation: A Mandate for Management

Pam Sornson, JD

March 5, 2024

On October 30, 2023, President Joe Biden issued an Executive Order compelling the development of new safety and security standards for Artificial Intelligence (AI) technologies. By doing so, the President acknowledged the role AI is already playing – and will continue to play – in the nation’s economic, industrial, and social development. He urged interested and invested parties to protect the community and promote fair and reasonable ideals when adopting its use. The Order represents just one aspect of the current global push to gain control over the ethical use of AI.

 

Organizing Innovation …

Regulating AI technology presents a unique set of challenges that must be addressed if the digital asset is to be safe to use and add more value than hazard to the community. A myriad of individual AI opportunities coalesce into a constellation of concerns raised by the programming. Each iteration and issue individually requires a dedicated regulatory response; together, they present a massive mission to the regulatory agencies responsible for AI’s oversight. The challenge is to coordinate the global effort to put guardrails around the tech to optimize its assets without unnecessarily impinging on its capacities.

 

… To Enhance the Good …

Just a short list of adopted uses for AI reveals the scope and extent of rules needed to maintain its integrity:

Connectivity – Assets included in AI programming are already connecting services and their providers to create an enhanced team capable of more together than each offers individually. In the healthcare sector, as an example, AI-powered programs are already conducting triage functions, issuing preliminary diagnoses, and sharing critical health information with far-flung medical team professionals at a pace unmatched by traditional collaborative methods.

Energy Management – The ever-expanding ‘distributed’ energy sector is also embracing AI opportunities to improve performance and build in reliability. Traditional community-wide power systems used centralized power stations to direct energy resources to their customers. Over the past few decades, however, adding in-home power options (solar, wind, and geothermal) resulted in a reversal of power flow. These resources now feed energy into the shared grid, and owners experience reduced energy costs or even receive revenue checks for their contribution to the network supply. The consequence is a power system that is vastly more complex than the original version, and the new iteration requires much more hands-on control. Emerging AI programs promise to streamline and coordinate that evolution.

Logistics – The COVID-19 era demonstrated the critical role that logistics play in the global economy, as supply chains failed, leaving millions of people and businesses without the goods and services they needed. AI in this sector is revolutionizing supply chain management and control. Automated warehouses populated by robots that don’t eat or sleep now provide a significant proportion of the physical labor involved, while sensors and cameras track the movement of goods through the system from their original creation to their ultimate destination. The efficiency level rises impressively in AI-enhanced facilities, which promises even more innovation to further the adoption of AI programming into the industry.

Each of these sectors involves thousands of companies and millions of people, all of which can influence the integrity of the respective system. Without regulation, the actions of any one person or entity can generate disastrous consequences for the others.

 

… And Manage the Bad.

Another short list of AI realities reveals the threats that an unregulated resource poses:

Job Losses – Automation – using robots to perform functions previously managed by humans – has already caused millions of job losses. In 2023, 37% of businesses responding to a New York-based survey stated they had replaced human workers with automated, AI-driven programming. Customer service workers were at the top of the list to be eliminated, followed by researchers and records analysts. Almost half (44%) of survey respondents indicated that more layoffs were likely in 2024.

Disinformation – The term ‘fake news’ became ubiquitous in the past decade as political entities sought to control voters, using false information to influence their actions. Using AI to produce and share inaccurate or manufactured data creates fundamental challenges for every entity that relies on its accuracy and veracity to maintain its credibility. When taken to extremes, the false commentary amounts to propaganda, which some assert is the world’s ‘biggest short-term threat.’

Security – Data security – keeping personal, corporate, and governmental information safely secured behind impregnable walls – has been top-of-mind in most industries for a long time, and a vast body of rules and regulations has evolved to protect it. AI presents a novel iteration – it generates ‘new’ information that may or may not be accurate or truthful. The computer program that created this ‘new’ information isn’t ‘aware’ of or concerned about its veracity (computers don’t ‘think’) and treats it like any other data bit within its reach. Consequently, the responses returned by the program are only as accurate and reliable as its affiliated databases. When those aren’t reliable, the AI response won’t be trustworthy either. Researchers note that AI software can be ‘gullible’ and produce manipulated responses when fed leading or misleading questions. It is also corruptible. Programmers can manipulate AI’s function by ‘poisoning’ its databases with false data. The action trains the technology to respond according to those directives and not pursuant to legitimate AI functioning. Without focused interventions, AI has the capacity to perpetuate biases and inequities when those influences are already programmed into its information stores.
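The ‘poisoning’ idea described above can be illustrated with a deliberately tiny sketch: a toy ‘model’ that simply answers with the majority label in its training data, and whose answer flips once an attacker injects enough false records. The question and data are invented for illustration; real poisoning attacks target far more complex systems:

```python
# Toy sketch of data poisoning: a trivial majority-label "model" whose
# answers shift when false records are injected into its training store.
from collections import Counter

def train(records: list[tuple[str, str]]) -> dict[str, str]:
    """Return, for each question, the majority answer in the data."""
    by_question: dict[str, Counter] = {}
    for question, answer in records:
        by_question.setdefault(question, Counter())[answer] += 1
    return {q: c.most_common(1)[0][0] for q, c in by_question.items()}

clean = [("is the sky blue?", "yes")] * 5
model = train(clean)
print(model["is the sky blue?"])  # majority answer from the clean data

# An attacker floods the store with false records ('poisoning'):
poisoned = clean + [("is the sky blue?", "no")] * 20
model = train(poisoned)
print(model["is the sky blue?"])  # the majority has flipped
```

The program never ‘knows’ anything changed: it faithfully reports the majority of whatever it was fed, which is exactly why corrupted databases yield corrupted answers.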

 

AI presents an infinite number of iterations and permutations that can be used for good – or evil. Without any sort of regulation on its use, its misuse poses a significant threat to virtually every corner of the global community. And, with so many entities now contributing to the AI universe, both legitimate and illicit, governing bodies are appropriately focused on putting guardrails on all AI efforts to maintain global stability and industrial reliability. The President’s Executive Order is one step toward gaining control over this emerging technology that offers so much promise for a healthier, more productive future for the world.

Generative AI: A Digital Double-Edged Sword

The benefits of embracing artificial intelligence (AI) are apparent. Already, the software is providing previously unimaginable service to many industries, such as speeding access to healthcare, streamlining education processes, and resolving previously unsolvable problems. However, as bright as its promise appears to be, the programming also poses significant threats to the entities that rely on it for both work and life functions, including today’s typical consumers. Any person or company that enjoys the assets offered by AI should also be aware of its vulnerabilities so that they don’t become victims of the burgeoning AI crime sector.

 

A Wolf in Sheep’s Clothing?

Generative AI (GenAI) is a subset of the same programming that provides ‘traditional’ AI functions. Both are ‘trained’ software programs, meaning that each is designed to pursue specific goals within the particular parameters of its developer’s strategy. Both styles scour billions of data bits to locate appropriate responses to user demands, and each provides data collection, analysis, and computing capacities that are far beyond those of humans.

However, the two AI aspects differ in one crucial way, and that difference is what makes GenAI so much more dangerous than the other:

Traditional AI offers responses based on the existing data within and the structure of its databases. It can only respond if that data is contained within its research resources.

GenAI, on the other hand, can create responses that are original to its search-and-create process. It takes existing information and spins it into creative narratives that are entirely its own, using existing data as a learning template to develop new and original ‘thought.’ The challenge arises because it is often impossible to tell a human-generated piece of content from one generated by the computer. Consequently, since computers are not bound by the human-mandated principles of ethics and morals, their products can appear completely valid and authentic even when they are rife with falsehoods and wholly unreliable.

Simply put, GenAI can act as an independent ‘person’ by producing original content with the same sense of authority and integrity as that produced by its human counterpart but without adherence to the rules, regulations, and standards that bind that human effort.

 

Creative Chaos Ensues

The application of GenAI for less-than-honorable purposes is already rampant:

‘Deepfakes’ are images generated from real pictures, manipulated to depict something that never happened. This type of GenAI often combines data from a variety of sources to create a plausible, realistic ‘new’ version of those combined concepts. Nefarious operators often distort images of political figures (as examples) to reflect the operators’ own philosophy, thereby misleading consumers into believing that the revised version is a truthful one.

‘Phishing’ offers another opportunity to distort or bury the truth from trusting users. Phishing activities typically use email or text prompts to entice consumers into clicking on links that appear to come from a trusted source and that offer a valuable and welcome service. However, those links actually direct the user into a ‘dark’ space on the web where their confidential, personal information – birthdates, banking info, healthcare specifics, etc. – is quickly extracted and stolen.

ChatGPT is, perhaps, the best-known example of GenAI. This software produces human-like results that are often indistinguishable from those presented by actual humans. To users who are not aware of the concern, ChatGPT’s creative products are frequently accepted as valid and authentic, as if a person, not a machine, generated them.

There are many ways for nefarious GenAI to manipulate the community to achieve its developer’s intended goals:

Manipulating images also frequently violates copyright rules, as the developers rarely verify the sources of their information. Organizations that use GenAI programming to pursue their proprietary ambitions might unknowingly compromise another entity’s legally protected assets.

‘False’ images created with GenAI can also amplify existing biases and prejudices, making current social issues even more dangerous. A recent analysis of ~5,000 pictures created using text-to-image software (a form of GenAI) revealed that the images generated showed extreme gender and racial stereotypes. Unsuspecting viewers might believe that those destructive distortions are actually verified versions of reality.

As noted above, inappropriate disclosure of sensitive information is also problematic when the computer program doesn’t discern it or won’t conform to the boundaries designed to protect that data.

 

Harnessing GenAI for Good

GenAI still offers users tools to achieve their goals and ambitions despite the threat of misuse and exploitation marring its opportunity. By carefully learning and following directives for appropriate and valid GenAI uses, every entity can maximize the value provided by the software while reducing their exposure to inappropriate outcomes:

Use only publicly available GenAI tools. There are commercially available software packages that facilitate exceptional AI functions without exposing users to the challenges of the format. Many of these digital programs are designed to address the demands of specific industries, as well, so adopting the technology might also be a mandatory step to retain the company’s current market share.

Invest in customization services. Every organization operates differently from its competitors, and those distinctions are also often its unique ‘claim to fame’ within its sector. AI programming can be modified to both protect and highlight the existing presence while also adding new values and opportunities for consumers.

Add data feedback and analysis tools to ensure the software is – and remains – on target for its purpose. Data in all its forms is the true currency of society these days; corporate databases contain myriads of unexplored information that can be harnessed to produce even more success for the company. AI programs will use that newly discovered data to refine their functions, providing even more value to the enterprise.

 

Both traditional and generative AI offer tremendous benefits as well as potential threats to every organization. Accordingly, the process of adopting the technology and adapting it for proprietary use should be carefully mapped out and managed. Those companies that master the project can also gain significant advantages in a sector where such advances may still be unexplored, which was, after all, the purpose behind the development of artificial intelligence in the first place.

 

 

Instructional AI: Advancing Education’s Capacities

Pam Sornson, JD

February 20, 2024

Like virtually every other type of modern technology, the usage of ‘artificial intelligence’ (AI) has triggered great fear, effusive elation, and almost all emotions in between. However, also like other modern technology, AI’s bona fide impact on society is not fully understood as so many of its capabilities – real and potential – have yet to be explored. Despite the challenge of not knowing whether its threats are actual or imagined, AI is making inroads into many cultural and societal venues, including that of higher education. How is it in use today? How might it be used tomorrow? And is its presence in the classroom a boon or a bust for modern learners?

 

AI as a Ubiquitous Tool

Most people already interact with AI, even if they don’t know that’s the programming they’re accessing. The digital resource drives virtually all of today’s smart devices, using intuitive and insightful strategies to facilitate a myriad of services all through a single portal.

Mapping programs steer users through the physical world, offering directions, time-to-destination, traffic notices, and even restaurant and entertainment suggestions.

The name (a noun) of today’s most popular search engine is now also a verb, and many people use it to describe how they found their latest new gadget or problem solution.

Even customer service ‘providers’ are more AI today than they are human. The now ubiquitous ‘chatbot’ – literally a ‘bot’ that ‘chats’ with people – responds to most online inquiries and is often the only ‘human’ resource consumers have contact with when looking for answers to their concerns.

It’s most likely that even those who fear AI and its potential threats use the resource as a regular part of their typical day.

And that exposure to and use of AI – and its closely related cousin, machine learning – isn’t diminishing either. The technology has the capacity to improve itself, and many cutting-edge computer programs in use today are programmed to enhance their own internal functioning. The potential values offered by these ‘smart machines’ make them increasingly desirable as business assets, so investments in AI are growing at a notable rate.

‘Healthcare tech’ uses AI to capture data, generate reports, connect medical teams, and inform patients. ‘Telehealth,’ the delivery of medical advice over an internet connection, is becoming the most popular method for connecting with healthcare professionals. During the COVID-19 pandemic, the number of telehealth appointments rose from 5 million in 2020 to over 53 million in 2022.

‘Smart assistants,’ such as Siri and Alexa, now monitor home and office systems to modulate room temperature, lighting, door access, and more. In 2022, more than 120 million American adults were using these assistants at least once a month.

The use of digital payment portals is also on the rise. Most of today’s banks use AI technology to interact with all their customers, both individuals and businesses. The tech facilitates 24-hour access, online deposits, withdrawals, and other services and maintains monitoring capacities over billions of dollars of financial and other assets.

Analysis of how AI technology is growing as a fundamental corporate asset shows that it is also quickly becoming foundational to numerous existing and emerging industries. Many of these industries – and the countless companies that make them up – are embracing the AI opportunity in the post-COVID era to revise their economic foundation and that of their community.

 

Instructional AI in the Spotlight

Clearly, AI has been successfully embedded in both personal and corporate daily activities for some time, so it’s not surprising that many industries, including the education sector, have also adopted the technology into their daily activities. In schools, AI programs and applications are typically identified as ‘Instructional’ AI, and those, too, have been around for several years. Their uses span the gamut of educational functions:

In some cases, the tech is used to track student activities. A well-programmed AI service can track attendance, test scores, course availability, and other data-rich nuances of the learner’s experience to inform school admins about their progress and facilitate fine-tuning of their overall educational adventure.

AI is also proving to be extremely valuable in streamlining educational resources to meet the needs of each individual student. Data collected reveals where those learners are experiencing challenges and where those challenges are originating. Sometimes, it shows that the person needs additional support; other times, it demonstrates that the school has missed the mark for serving this particular person or class of people.

Perhaps the most common implementation of AI in educational processes, however, is its use in instructional design. The designers of programs, courses, and the classes that make them up are constantly improving their resources to better meet student needs, with the goal of enhancing the learner’s absorption of the content to achieve ultimate success in their chosen subject matter. AI tools give these ‘education architects’ the capacity to deliver deeply personalized course content that responds to the individual learner’s past performance, learning pace, and preferences. Further, in addition to structuring the course in a format more compatible with the students’ preferences, AI also facilitates adaptations to the system in real time. Data collected as the course unfolds informs the programming, which can then modify modules or lesson plans accordingly.

AI also offers an asset that’s already well-favored in the community: its gaming capacity. Game-based learning opportunities that are enhanced by AI can provide more individualized and flexible learning opportunities for virtually all students, which can also increase their likelihood of persisting to graduation and then finding the job and career that best suits their needs.
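The real-time adaptation described above can be reduced to a minimal sketch: a rule that picks the next module’s difficulty from a learner’s recent scores. The thresholds and the function name are hypothetical, not drawn from any actual courseware API:

```python
# Minimal sketch of adaptive pacing: choose the next module's difficulty
# from a learner's recent scores. Thresholds are illustrative assumptions.
def next_module(recent_scores: list[float], current_level: int) -> int:
    """Return the difficulty level for the next module (1 = easiest)."""
    if not recent_scores:
        return current_level          # no data yet; stay put
    average = sum(recent_scores) / len(recent_scores)
    if average >= 0.85:               # learner is mastering the material
        return current_level + 1
    if average < 0.60:                # learner is struggling; step back
        return max(1, current_level - 1)
    return current_level              # stay the course

print(next_module([0.90, 0.95, 0.88], current_level=2))  # advances to 3
print(next_module([0.40, 0.55], current_level=2))        # drops back to 1
```

A production system would weigh far more signals (pace, preferences, item difficulty), but the loop is the same: collect performance data, compare it to targets, and adjust the upcoming content.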

 

The use of AI as a learning tool is growing as more higher education institutions adopt it to service their myriad of programs and workflows. For students, the digital asset is proving to be an invaluable addition to their education, so long as they use it with integrity. For the higher education sector in general, AI also offers the opportunity to develop whole new avenues of courses and careers to meet the burgeoning demand for skilled AI technicians. At least at this first glance, AI is performing exceptionally well for the education community.

Re-Valuing Labor: Ousting Obsolescence

Throughout 2023, The Pulse explored how society values ‘labor’ – how it recognizes and compensates the effort provided by workers for their employers. The research revealed that, in many cases, that value is set not by the actual work done but instead by the relative value of the person who performed it:

In too many cases, women and people of color are deemed to offer less value than caucasian men, even when they do the exact same work.

A significant percentage of global labor efforts are completely uncompensated, which suggests those ‘workers’ offer no value. Contrary to that thought, however, is the reality that their effort contributes significant value to the community by providing social and altruistic benefits while alleviating the burden on public funds.

Sometimes, the workers exposed to the highest risks while on the job were also the lowest-paid. The COVID-19 pandemic clarified the exceptional value of the “essential worker,” whose ‘menial’ labor was determined to be vital to protect the health and safety of their community.

The advent of digital automation as a labor provider has also upended many work-world norms. In some cases, human workers were eliminated altogether in favor of a more accurate and speedier machine. (In other cases, however, the technology augmented the human effort, creating more value for the company.)

These realities reflect an entrenched perspective of the ‘value of labor’ as that has been defined by centuries of human development and evolution. Effort was assigned ‘value’ based on the time it took to complete, its relative simplicity, the personal characteristics of who was providing it, or the volume of product rendered within a specified period. That long-standing but haphazard ‘labor valuation’ practice has caused a wide and growing gap between the compensation paid for services and the actual value they provide to the community.

 

Outputs vs. Outcomes: A Critical Difference

However, the calamities of the past few years appear to have triggered a shift away from the norm of ‘work value’ being appraised based on who is doing it, not on the benefit it confers. The opportunities that have emerged through the COVID years – digitization, remote work, Artificial Intelligence – and the pressures generated by those phenomena are driving a re-imagination of how ‘labor’ is valued. Instead of using piecemeal time measurements or a rote count of widgets produced, industry leaders are now contemplating how to engage with all the assets offered by their employees to improve the company’s fortunes as a whole. Organizations of all types are now reviewing their current workforce deployment strategies to determine if they would fare better if they altered their traditional worker and occupational expectations.

Shifting Focus …

What’s becoming more apparent is that emerging workforce options can offer significant benefits to those entities willing to embrace them. Enterprises are taking a broader look at their workforce to determine if unexplored potential within that resource might serve the organization better. In many cases, jobs can be automated with technology, which frees the human worker to provide a more intentional service. In other cases, a re-analysis of the full spectrum of advantages offered by a particular occupation or worker can reveal significantly more value when viewed through a ‘corporate benefit’ lens instead of a ‘payroll expense’ lens:

How would the company function if workflows were designed to expand on or augment employee talents and skills beyond their rote mechanics?

How can workers contribute more to achieving corporate goals rather than just adding numbers to corporate outputs?

How can the company leverage the outputs of its human resource assets to improve overall productivity and profitability?

One possible response to each of these questions is to shift the corporate mindset away from bean-counting and toward value creation.

 

… to a New Corporate Reality …

As is usually the case, shifting corporate culture to embrace new opportunities requires a dedicated strategy that addresses both the old elements that need changing and the new elements that introduce revised expectations.

From an overarching perspective, the entire organization can focus beyond earlier metrics (cost savings or efficiency levels, as examples) and toward ‘value creation’ in general. What are the company’s most significant barriers to growth, and how can revising the effort of its workforce address those challenges better?

The company can also determine which elements are best suited for automated labor and identify where human effort may be wasted. In these instances, adding technology to do rote work frees the employee to contribute other value to the company. New technology might also augment that worker’s effort, increasing its value while reducing the cost to attain that enhanced benefit.

Enterprise leaders might also review each job description and activity to discover the highly valuable but often hidden ‘soft skills’ embedded within it. Most employees can be taught the manual skills needed to perform specific tasks, but their intellectual capacity to analyze and make changes on the fly is what makes them more valuable as contributing employees.

Organizations are embracing this ‘changed mind’ and creating jobs and occupations that facilitate both more flexibility and respect for the worker and also improved productivity for the business.

 

… That Embraces ALL Possibilities.

But they can’t stop there. Recent data reveals that the inequities that were so deeply entrenched in the workplace habits of the past have not been fully alleviated post-pandemic:

Women still need to catch up to men in their economic recovery after the pandemic. One reason they lag behind is that they frequently held those jobs that were most impacted by COVID-induced closures, including any job that required face-to-face exposure. Women also perform most of the community’s caregiving services (frequently unpaid work), and that burden increased during the coronavirus era.

Those without post-high school education credentials are also still suffering the economic depression caused by the pandemic. The workforce population with bachelor’s or higher degrees has grown by 6.9% over numbers posted in April 2020, while the group without that educational achievement is 3.2% smaller than it was at that time.

Color and ethnicity also remain factors in the employment sector. Unemployment for Blacks and Latinos stands at 5.1% now, while the white unemployment level is only 3.5%.

Both companies and their workforces are aware of the evolutions now occurring in today’s labor markets. New occupations are developing to replace those now deemed obsolete, while emerging technology and workforce philosophies are driving innovation in hiring and retention strategies for high-quality workers. The situation offers hope that, despite the losses and destruction caused by COVID-19, the pandemic did open doors – and eyes – to new ways to engage with and reward the human resources that truly manage the global economy.

 

Re-Valuing Labor: Reforms for Workers

Corporations aren’t the only entities that are questioning how ‘work’ should be valued in the post-COVID environment. Workers are also reassessing their relative worth within their employers’ organizations and realizing that they want and need more than just a paycheck from their jobs. Over 48 million Americans left their jobs in 2021, and over 50 million did the same thing in 2022, a development now known as the Great Resignation. Gartner labels the phenomenon driving the exiting workers the ‘Great Reflection,’ as they seek more meaning in how they spend their time; old-style occupational valuation makes them wonder if the ‘nine-to-five’ obligation is still worth their effort. Companies looking to retain their experienced workforce and slow the talent and human resource drain are now also seeking ways to respond to the increasing demand that ‘working’ should also afford ‘meaning.’

 

The Pandemic’s Short-term Impact on Workforce

The COVID-19 pandemic caused unprecedented changes in how the world works. That global health crisis triggered an equally global re-evaluation of what is truly important in life, as millions of people succumbed to the virus and millions more were compelled to move on in the face of those losses. In response to those revelations, thousands elected to change their work circumstances to better reflect what is truly meaningful to them. Many people are no longer willing to spend precious time working in occupations that provide a paycheck but little more.

Recent research reveals several reasons why there was a mass exodus of workers from every kind of occupation throughout the course of and after the pandemic:

Many felt unappreciated by employers who paid them less than they knew they were worth. In a Pew Research Center survey, 63% of respondents said low pay was a top contributor to their decision to leave.

Many respondents (also 63%) quit because their occupations offered no opportunity for career advancement. They cited stagnant positions with little or no control over when or how they performed their work. That lack of control over work hours was particularly aggravating when the work itself was not time-sensitive. They concluded that maintaining an unfulfilling, limiting employment status quo was no longer an option.

Coupled with the low pay was the lack of respect many workers felt while on the job. More than half (57%) of the survey group stated that their employers did not notice or recognize their full slate of talents and skills.

Others left because they felt overworked or underworked by their organization, had challenges obtaining child or family care, or needed benefits that weren’t offered (paid time off or healthcare insurance, for example). Still others decided it was as good a time as any to relocate to a new community better suited to their needs and tastes.

In all cases, workers determined they wanted a better life than their current job could give them and elected to move on despite the risks entailed in that process.

 

Impact on Industry

The mass resignation phenomenon obviously impacted employers and industries, too. Typically, companies don’t plan for the problems that arise if their workforce suddenly shrinks or key workers elect to go elsewhere. Those that were unprepared often found that they could not respond appropriately to their customers’ needs without adequate staff, which caused many businesses to fail.

Also, some industries were more affected by the phenomenon than others. Throughout the pandemic, the education, healthcare, retail, hospitality, and transportation industries experienced significant workforce shrinkage. In some cases, jobs simply disappeared as public health overseers implemented ‘social distancing’ and other coronavirus safety measures. In other instances, occupations were permanently ‘retired’ as machines took over the labor. And in many circumstances, job openings went unfilled because no one wanted to do the work. Certainly, today’s workforce and employment trends are decidedly different than they were just three years ago.

Impact on Workers

Fortunately, the mass resignation episode did not also result in higher unemployment. Instead, many job seekers were able to leverage their skills and talents into positions that better matched their capacities and preferences. Savvy employers, also very aware of the ongoing employment migration, had modified their open positions to facilitate ‘sustainable careers’ for these new hires. They added benefits and other employee-facing resources that met candidates’ newly elevated expectations. Additionally, many organizations adjusted the requirements of unfilled job openings to emphasize the ‘soft skills’ that enhance daily activities – the analysis, reasoning, and decision-making abilities that extend beyond typical day-to-day duties. Workers who take these hard- and soft-skilled jobs often make better money, have more flexibility in their work conditions, and can actually influence the fortunes of the enterprise. Not surprisingly, these ‘enhanced’ occupations are very attractive to workers who have been asking for exactly what they offer.

 

The Pandemic’s Long-term Impact on the Workforce

New data suggest that the Great Resignation is receding or over, as the ‘quit rate’ has returned to its 2019 average. But that doesn’t mean the work world has returned to its former self. Instead, many organizations have embraced their workers’ expectations as the ‘new normal’ and are taking steps to provide a truly ‘employee-friendly’ work environment. They are adding to every hiring package benefits and perks that were previously reserved for the upper echelon of leadership. And as workers return to work (both in the office and remotely), they gain substantially better occupational situations than those they left behind. Many companies now offer, as a matter of course:

mental health support (often in conjunction with better physical health benefits, too),

financial support in the form of enhanced 401(k) plans, coaching, and even tuition reimbursement for upskilling workers, and

additional resources for child and family care.

Companies are also reviewing how their work gets done. Hybrid positions, in which the employee works remotely some of the time, are becoming more common in occupations where location isn’t important. Candidate pools are changing, too, as leaders look for talent in diverse communities that had been overlooked in the past. And everyone is adding technology to perform rote and routine tasks and to augment the intellectual inputs of human workers.

 

From all perspectives, it appears that the COVID-19 pandemic and its fallout forever changed how the world works. Looking forward, the businesses that will achieve the greatest success will also be the ones that give their employees the attention and consideration they seek.

 

IIJA Infrastructure Investments and Projects

The Infrastructure Investment and Jobs Act (IIJA) went into effect in 2022 and has been stimulating projects and proposals across industries ever since. Its intent is twofold:

To repair the aging infrastructure that is failing after decades of excessive wear and erosion, and

To build a national workforce capable of maintaining the new foundation while also building capacity for continuing evolution and growth.

The bipartisan law intends to address several pressing national needs through these investments:

Ensure a safe and functional physical infrastructure and a robust economic foundation for future growth;

Provide all communities with new resources for building their specific economies;

Reduce the inequities that remain so solidly in place in many long-entrenched social systems and

Provide training and employment for millions in both traditional and newly emerging occupations and careers.

The scope of the law is immense. The opportunities it promises for further expansion and advances across all industries are unlimited.

 

One Broad Approach. Many Narrow Targets.

Repairing What’s Broken

Beyond repair costs, the law allocates more than $550 billion over eight years toward new developments in digital connectivity, energy systems, and transportation networks to meet today’s burgeoning demand for these services.

Projects to expand internet assets through new ‘middle-mile’ infrastructure (the digital bridge that connects data to the users who seek it) will receive some of the $65 billion allotted for that purpose through the Broadband Equity, Access, and Deployment (BEAD) Program.

A cleaner, more efficient energy industry is also a target of the IIJA. It will invest $65 billion in energy innovation, carbon management, farming and forestry developments, and cleaner transportation systems. Just one goal of this aspect of the bill is to reduce the country’s reliance on fossil fuels and its second-highest-in-the-world CO2 emissions. The United States currently produces 14% of global CO2 emissions each year, behind only China (39%).

It’s also spending over $100 billion on climate change initiatives and electric grid restructuring in response to the regional flooding, storms, and power outage catastrophes that have occurred in recent years.

These ‘repair and rebuild’ investments are responsive to decades of spending cuts that have left much of the nation’s foundation in tatters.

 

Building What’s Needed

Not insignificantly, the IIJA is also looking to develop ‘best practices’ within systems that impact everyone. The law sets aside funding to establish ten Centers of Excellence affiliated with a planned ‘National Center of Excellence for Resilience and Adaptation.’ This network of government agencies, industry experts, and higher education think tanks will focus its efforts on improving the nation’s transportation and logistics resiliency in the face of worsening weather and climate events. Each Center will receive funding for research and development of resilient transportation, energy, and climate change projects that respond to circumstances within its region. The goal is to prevent the damage and loss experienced in the past by communities living in underdeveloped areas that lack these resources.

Notably, the law’s originators envision each Center of Excellence being administered, potentially, by a local higher education institution. As coordinators, the schools will help administer the $500 billion in new money for ‘surface transportation’ projects, and the schools themselves could also qualify as ‘centers of excellence’ within their regions, which would make them eligible for IIJA project funding as well.

That administrative role is crucial, too, to the success of the overall IIJA project. Within the transportation sector alone, there are 11 key programs established to launch technology deployments and perform advanced research. The work is designed to result in a “future-proofed transportation system that is data-driven and evidence-based”:

Strengthening Mobility and Revolutionizing Transportation (SMART) Grants ($1B, new) – to advance smart urban technologies and systems that improve transportation efficiency and safety.

University Transportation Centers (UTCs) ($500M, expanded) – to advance transportation expertise at two- and four-year colleges and universities.

Advanced Transportation Technologies and Innovative Mobility Deployment (ATTIMD) ($300M) – for developing advanced transportation technologies and innovative mobility deployments.

Advanced Research Projects Agency-Infrastructure (ARPA-I) (new) – A new agency focused on leveraging science and technology to address efficiency, safety, and climate goals for transportation infrastructure.

Open Research Initiative (authorized at $250M, new) – to manage unsolicited research pitches that address unmet DOT research needs.

Nontraditional and Emerging Transportation Technology Council (institutionalized) – the now legally entrenched NETT Council will identify and resolve gaps associated with nontraditional or emerging transportation technologies.

Transportation Research and Development Five-year Strategic Plan (renewed) – The USDOT Research, Development, and Technology Strategic Plan guides Federal transportation research and development activities.

Smart Community Resource Center (new) – incorporating resources from the USDOT, its Operating Administrations, and other Federal agencies to develop intelligent transportation systems.

Joint Office of Energy and Transportation (new) – A DOT and DOE (energy) partnership to study, plan, coordinate, and implement issues of joint concern.

Transportation Resilience and Adaptation Centers of Excellence (TRACE) Grant Program (Authorized at $550M, new) – See above.

Within the DOT, more than $4.5 billion in research activities is now, and will continue to be, focused on a range of critical priorities, including vulnerable road users, the impacts of self-driving vehicles on roads, reduction of driver distractions, and emerging alternative-fuel vehicles and infrastructure.

 

With new funding available for almost countless projects touching thousands of neighborhoods and millions of people, the IIJA promises significant value and better living opportunities for all its communities. To achieve the law’s fullest fruition, however, the effort will also need a workforce development pipeline that delivers precisely trained labor to build and maintain both existing and new resources. For that purpose, the IIJA is looking to the country’s network of community colleges as its training and credentialing foundation. And for that to become a reality, those schools – all 1,038 of them – will have to redefine themselves as workforce development hubs. That project, too, is in progress. Read on.

The IIJA, WFD, and the CCCs

How and where to allocate available funding is a critical element of any infrastructure project, especially when there is more than $1 trillion to distribute. The Infrastructure Investment and Jobs Act of 2021 (IIJA) was specific in its focus when determining how best to use its $1 trillion in funding. Four central U.S. government departments – the Departments of Energy, Commerce, Transportation, and Labor – are the primary beneficiaries of the funds, which are to be used to improve and innovate foundational community resources as well as develop the workforce that will keep those services working and in good condition. The ‘workforce development’ aspect encompasses more than just funding to pay workers. It also covers the incremental steps needed to get those workers ready: policy development, implementation strategies, and educational reorganization are all necessary to ensure the resulting labor force is well trained for the work it is expected to perform. Consequently, the IIJA is facilitating an overhaul of the higher education sector to build the country’s future workforce training programs.

 

 

Demand Drives Distribution

The IIJA doesn’t specify how training should occur or who should be doing it. However, one logical choice to provide those services would be training institutions already embedded in the community: the nation’s community colleges. These local academic and technical schools are already on the front line of the labor development initiative. With IIJA funding, they can adapt existing programs and build new ones to train students for both current and future occupations. And a vast array of new jobs will emerge as the economy absorbs modern labor innovations, such as artificial intelligence and automation. Across three federal agencies, almost $500 M is allocated for training purposes:

The Department of Energy (DOE) has $160 M available for both career skills training and to expand its network of energy engineers through the development of more Industrial Assessment Centers.

The Department of Labor (DOL) has $50 M in grant funding through its Strengthening Community Colleges Training program.

The Department of Transportation (DOT), tasked with perhaps the most significant aspect of the bill, will share up to $280 M over five years for training the workforce needed for its Low and No Emissions Bus Program. Additionally, the Bipartisan Infrastructure Law (BIL), passed in November 2021, continues existing DOT funding for ‘surface’ transportation projects and advances funding appropriations for supporting services through 2026. These funds will add resources – and workforce – to ensure the nation’s air space, highways, railroads, and commercial waterways are safe for use (especially in light of climate change), modernized to optimize digital resources, and equitable.
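The three agency figures above can be tallied with a quick sketch (a minimal illustration in Python; the dollar amounts are those cited in this section, and the dictionary labels are informal shorthand rather than official program titles):

```python
# Tally the IIJA training allocations cited above, in millions of dollars.
# Labels are shorthand for the programs named in the text, not official names.
training_allocations_m = {
    "DOE career skills / Industrial Assessment Centers": 160,
    "DOL Strengthening Community Colleges Training": 50,
    "DOT Low and No Emissions Bus Program (over 5 years)": 280,
}

total_m = sum(training_allocations_m.values())
print(f"Total across the three agencies: ${total_m}M")  # 160 + 50 + 280 = $490M
```

The $490M sum is what the text rounds to ‘almost $500 M’ across the three federal agencies.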

Each state is empowered to spend its share of the federal opportunity as it sees fit, according to its unique industrial, social, and educational challenges. However, the national legislation does have some caveats about how at least some of the money must be spent. Eligible workforce development projects funded by the BIL, for example, must meet four primary criteria:

Increase women and minority participation in the workforce;

Fill workforce gaps – provide workers for jobs that are currently unfilled;

Add digital and related training elements to support emerging transport realities and

Attract new investment opportunities to incorporate new revenue sources into the economy.

With these parameters in place, Governors and other workforce development leaders are free to fund (among many options) student tuition sources, apprenticeship developments, collaborative organizations, and outreach activities that unite the entire community – schools, businesses, industries, and government – behind the workforce development initiative.

 

 

States Select Strategies

At the state level, where the funds will be spent, stakeholders must determine not just where to invest but also the processes necessary to ensure those investments pay off. Analyses of local, regional, and state-level economies will suggest where these processes might begin:

Addressing the most pressing community needs – food scarcity and lack of housing are often caused by insufficient or non-existent work opportunities;

Clarifying relevant stakeholders and their potential for collaboration;

Enhancing existing workforce development activities and practices, or

Filling existing workforce needs and gaps.

These are just four of the many challenges that can be remediated with BIL funds.

 

 

Enlisting Educators

At the heart of the workforce development project is the training facility. Schools designed to provide occupational and vocational education – including community colleges, technical schools, trade schools, etc. – have many options open to them that can be funded with IIJA dollars:

Staff upskilling and training – the pandemic demonstrated that today’s education tools are much more varied than books and paper.

Professional development – see above.

Incorporating internships into existing workflows.

Partnering with community members to facilitate apprenticeships.

Outreach to bring more invested parties into the conversation and the project.

The state governments that are tasked with allocating these funds can streamline their decision-making processes by creating and pursuing a goal set that builds on existing resources:

They can leverage current educational capacities by understanding what each program offers, tweaking it to fit needs better, or expanding it to serve more people.

They can use existing data in conjunction with emerging information and digital technologies to clarify precisely where demand lies and solutions might be found.

They might set up a governing body to oversee, from a 30,000-foot level, the activities, outputs, outcomes, and results of IIJA investments. Interim reports can reveal where things are on target – or not.

They can engage with their industrial community, especially the businesses and organizations already invested locally. Businesses thrive with well-trained employees, so business owners should have some influence on what it takes to create that human resource.

 

The IIJA is funding America’s renewed infrastructure framework by focusing its resources on the civic, social, and industrial opportunities that benefit the country as a whole. The ultimate goal is to enhance and bolster the nation’s workforce so the economy has the labor it needs to maintain stability and generate growth. Facilitating that process are the country’s middle-skill schools – the trades, unions, and community colleges – that are evolving into the workforce development hubs America needs to achieve these ambitions.