A salaryman (サラリーマン) is the Japanese term for a salaried white-collar worker who is understood to enjoy tenure of employment in exchange for undertaking the risk of death through overwork—karoshi (過労死). In the United States, a comparable implicit contract between management and white-collar workers was lifetime employment, with a gold watch and pension upon retirement, in exchange for loyalty: covering for the boss, reticence about pushing for salary increases, refusing to entertain offers from competitors, and conforming to other norms of social behavior prevalent in the workplace. This model has all but disappeared from the U.S. workplace. It has been replaced by planned insecurity: emphasis on the employment-at-will doctrine, outsourcing, contract positions, and other means of minimizing labor costs.
It's no wonder, then, that the prospect of labor with marginal costs approaching zero takes anxiety about job security to new heights.
Not to worry (so much)
Does this sound familiar?
The CEO has promised shareholders that performance will steadily improve across the global enterprise in the current five-year plan. Results to date have been unsatisfactory, and HQ has announced a mid-course correction initiative. By year-end, each division will report to global on measures to improve performance; by end of third quarter, lines of business will similarly report to division; by end of second quarter, departments will report to the lines of business; by end of first quarter, corporate support units will report to the respective chief administration officer; and our report is due to legal and compliance no later than the end of February. By February 15, Clarisse will report to me with a comprehensive plan and program to attain improved performance in the regulatory compliance organization, Rob will do the same for legal affairs, legislative and agency operations, and Sarah for the general counsel's office: litigation, contracts, corporate disclosure, treasury support, etc.
If you are at the bottom of a food chain such as this one, you know how you will be spending your waking hours until the deadline, but you will have little idea how much time will be left over, after the various initiatives, to engage with the world outside the corporate community. Because you work in a cost center, there is nothing you can influence that will increase revenues. (From experience, however, you know that the templates for this sort of exercise require you to justify that viewpoint.) Because you work in defense litigation, there is nothing that can be done to reduce the level of incoming cases. What can be done is to estimate the required litigation reserves less conservatively, make greater use of lower-cost professional staff, or recommend quicker settlement where that is more cost-effective than the effort required to prevail through the entire course of litigation, including final appeal. Due to the hiring freeze, nothing can be done by way of replacing seasoned litigators with new admittees. That leaves as the only option replacing the long-time seasoned outside counsel roster with lower-tier, lower-cost firms, and it is impossible to estimate the effect of that option on overall net litigation costs in terms of reduced success rates. As for reducing litigation reserves, that will require negotiation with the Controller's office.
Now assume that you, as litigation department head, and the people who work for you are worried about the possibility of losing work to AI tools. For comfort, ask an expert in prompt engineering to describe your tasks on the initiative to a chatbot and submit the description to one of the more advanced engines. Should you feel anxiety that this eventuality will lead, sooner or later, to the extinguishment of your job?
Probably not.
Is the fear justified?
Consider a generic formulation of the question:
You are a management consultant advising a unit of a major global corporation, asked to recommend a solution to a problem that has been poorly defined, implicates an unknown number of organizational units internal to the corporation and an unknown number of outside entities, and involves uncertain costs. How would you respond to such a request? What guidance would you provide on the information needed for you to make such a recommendation?
Here is what Claude, the Anthropic AI tool, advises:
If asked to provide a recommendation for a poorly defined problem implicating many unknown factors within and external to the corporation, I would respond by focusing my guidance on the key information needed to properly scope the issue and risks before jumping to potential solutions. My initial advice would include:
Defining the Problem Space:
First and foremost, I would say we need to better define the actual problem we are trying to solve. What negative business impacts does it create? What evidence led to it being identified as an issue, and when? What organizational pain points link to this problem? Clearly determining the root causes, impacts, timing, and scope provides directionality on constructive solutions.
Clarifying Affected Stakeholders:
Secondly, rather than allow assumptions about which internal groups or outside entities factor into this problem, we need details. Identifying all affected stakeholders gives us much needed perspective on the breakdowns contributing to the issue as well as considerations for solution development like subgroup needs, communication flows, authority chains etc.
Analyzing Cost Implications:
Finally, to justify dedicating resources to addressing this problem with confidence, the range of direct and indirect financial costs and risks in play requires detailing. Is solving it a strategic investment priority area? What budgetary constraints exist? Shining light on the monetary impacts will inform decision-making authority and payback expectation setting.
In essence, I would advise resisting the urge to propose top-down changes without bottom-up foundational issue framing. Suggest incremental information gathering to size up problems judiciously first.
Solutions are the chief cause of problems. Eric Sevareid, American broadcast journalist and commentator (attrib., perhaps quoting John Maynard Keynes).
Delegating an inadequately specified problem to an AI agent begins a conversation; it does not provide a solution. Successive iterations approach, but do not reach, a solution in the absence of some criterion by which to accept an answer at any stage. This is the problem of bounded rationality.
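The bounded-rationality point can be sketched in code. Everything below is a hypothetical illustration, not a real API: `refine` merely stands in for one round of AI-assisted reformulation, and `acceptable` is the caller-supplied criterion. The loop terminates only because such a criterion exists—this is Herbert Simon's "satisficing."

```python
# Hypothetical sketch: iterating with an AI agent needs a stopping rule.

def refine(problem: str) -> str:
    """Stand-in for one round of AI-assisted reformulation; each pass
    yields a somewhat better-specified question, not a solution."""
    return problem + " (further specified)"

def iterate(problem: str, acceptable, max_rounds: int = 100):
    """Satisficing loop: return the first formulation the caller deems
    acceptable. Absent `acceptable`, iteration has no principled end."""
    for _ in range(max_rounds):
        if acceptable(problem):
            return problem
        problem = refine(problem)
    return None  # deliberation budget exhausted: bounded rationality
```

The function `acceptable` is the missing criterion; supplying it is a human judgment the loop cannot generate for itself.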
Pursuit through meta-discourse
One might attempt to fashion such a criterion by posing the conundrum as a meta-question.
After performing these steps, by what criteria would you advise selecting among alternative solutions?
After properly defining the problem, identifying affected stakeholders, and analyzing cost implications, I would advise applying the following key criteria when evaluating and selecting from any proposed alternative solutions:
- Impact - The extent to which a solution mitigates the clearly identified negative impacts of the business problem across affected groups. High impact solutions directly address root cause issues.
- Achievability - How executable and realistic a solution is given organizational constraints like resources, budgets, capabilities, politics, timelines, etc. Solutions with low achievability often stall.
- Adaptability - The ability of the solution to accommodate change if aspects of the problem shift or new variables emerge. Adaptable solutions are resilient rather than rigid.
- Cost-Effectiveness - The degree to which the payoff from the solution justifies monetary, labor or opportunity costs. The ROI should exceed break-even.
- Stakeholder Buy-In - Level of acceptance across units and teams affected is key. Rejected solutions fail even if technically sound on paper. Cultural fit matters.
Essentially, the criteria focus on measurable improvement, feasibility, flexibility, efficiency and collective support. Solutions satisfying all criteria may be rare, so tradeoff analyses with transparent data become important. But these tenets allow advising leadership backed by logic, not just subjective intuition.
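Claude's tradeoff analysis can be made concrete with a small scoring sketch. The criterion names follow the list above, but the `weighted_score` and `rank` helpers and any weights are invented for illustration: a weighted sum makes the comparison transparent, while the weights themselves remain exactly the human judgment call Claude flags.

```python
# Hypothetical sketch of a transparent tradeoff analysis over the
# five criteria; weights encode priorities the data cannot supply.

CRITERIA = ["impact", "achievability", "adaptability",
            "cost_effectiveness", "stakeholder_buy_in"]

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted sum of a solution's per-criterion scores."""
    return sum(weights[c] * scores[c] for c in CRITERIA)

def rank(alternatives: dict, weights: dict) -> list:
    """Rank named alternatives by weighted score, best first."""
    return sorted(((name, weighted_score(s, weights))
                   for name, s in alternatives.items()),
                  key=lambda pair: pair[1], reverse=True)
```

Changing the weights can reorder the alternatives, which is the point: the arithmetic is objective, the prioritization is not.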
The call
Given the definition of such criteria and transparent data, would you be able to recommend a solution?
No, having well-defined criteria and data alone is likely insufficient for me to definitively recommend a single solution. Some key additional factors I would highlight:
- Tradeoff Prioritization - The criteria may conflict in evaluating alternatives, requiring judgment calls on priorities. Is a cheaper but less adaptable option better? Input on prioritizing the decision criteria is still needed.
- Unforeseen Constraints - Data patterns from the past may not unveil constraints on solutions emerging from newer internal/external developments not in historical data. Qualitative judgment would still supplement quantitative data analysis.
- Creativity Limitations - Transparent data-driven analysis has value but still reflects past biases. Generating vs just evaluating ideas for solutions places emphasis on forward-looking thinking that data cannot fulfill alone.
- Implementation Planning - Even an optimal solution will fail if rollout, change management, aligned incentives and training are not realistically planned out upfront. The data and models do not plan execution.
In essence, while data and criteria lend valuable objectivity and discipline to comparing solution options, subjective human planning, priorities and readiness for change are integral to decisively recommending a direction. I would be careful not to overclaim the sufficiency of just data or metrics for decisively selecting solutions.
Solutions are the chief cause of problems. Eric Sevareid, American broadcast journalist and commentator (attrib., perhaps quoting John Maynard Keynes).
Having begun with one poorly defined question, we have arrived, using AI, not at a solution but at a number of somewhat better-formulated questions. Notice that in the last round, Claude begins by claiming
But these tenets allow advising leadership backed by logic, not just subjective intuition.
but ends with
subjective human planning, priorities and readiness for change are integral to decisively recommending a direction
Such a result shouldn't be surprising.
Science is what we understand well enough to explain to a computer. Art is everything else we do. Donald E. Knuth, Foreword to $A = B$.
Gödel's Incompleteness Theorem also bears on the problem. Gödel, Kurt, On Formally Undecidable Propositions of Principia Mathematica and Related Systems (translated by B. Meltzer). In any consistent formal system capable of expressing arithmetic, there exist propositions that can neither be proven nor disproven within the system. At bottom, AI rests on arithmetic operations.
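For reference, the first incompleteness theorem in a modern informal statement (a paraphrase, not Gödel's original wording):

```latex
\begin{quote}
For any consistent, effectively axiomatized formal system $F$ strong
enough to express elementary arithmetic, there is a sentence $G_F$ in
the language of $F$ such that $F \nvdash G_F$ and $F \nvdash \neg G_F$.
\end{quote}
```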
Finally, organizational imperatives bear.
… it is manifest that there need be little or no relationship between the work to be done and the size of the staff to which it may be assigned. … [and two motives underlie this proposition] (1) "An official wants to multiply subordinates, not rivals" and (2) "Officials make work for each other." C. Northcote Parkinson, Parkinson's Law and Other Studies in Administration (1957)
You are your headcount. (anon.)
Be of Good Cheer
Any job susceptible of automation through investment in data-processing equipment at a lower total cost than labor either has been or inevitably will be eliminated. Being in possession of a job eliminates the former possibility, and the foregoing considerations negate the premise of the proposition. Figuring out what to do is not susceptible to automation.