31 Comments

Great analysis that sobers us up from intoxicating abstractions like "digital transformation". My favorite idea from the post is the parallel drawn between the pace at which computers were adopted decades ago and the pace at which AI is likely to be adopted today.

I well remember that it took years to adopt electronic medical records in the psychiatry clinic where I worked - and what a piecemeal struggle it was. It's a wonderful principle: AI will be widely adopted in industry, but "at the speed of human negotiation and learning".


Another interesting example of that is the transition from steam to electricity. Initially, factories were steam-powered: you had a big furnace that workers fed coal into, with all the machinery arranged around it. Then, when electric motors came around, factories replaced the big furnace with a big motor, with minimal productivity gains. But over time they reorganised factories to suit motors rather than steam furnaces, and learned better work practices for motors. That is what led to the larger productivity improvements.


And from what I've seen that process took about 30 years!


Here is the article; it's about the productivity paradox: https://www.bbc.com/news/business-40673694


This is a *great* post. I am aware of one startup that is trying to use AI in the way you contemplate here. They are building a specific AI tool, fine-tuned on data that is not online, which is essentially the exhaust of certain industrial processes. (Being vague here because I don’t know how much of what they’re doing is meant to be publicly discussed.) And they have spent years developing close relationships with their target customer. I think they will be successful, but it won’t simply be due to “AI is magic”: there is a very real human element here, and no amount of silicon will overcome carbon.


I'm a founder of an AI company and this is precisely how we operate. We have our own pre-trained LLMs, and we bring these to the customer's data, deep behind their corporate firewall, and fine-tune them on data that will never be available publicly. Although we are a young company (barely two years old), we've had some success, and I'm more convinced than ever that to the extent generative AI "revolutionizes" the enterprise, it will be through these use cases where smaller models are fine-tuned on proprietary data. The foundational LLMs that are all the rage now are essentially trained on the same data - i.e. the internet - and while they have impressive capabilities for generating a wide range of responses, they are generally pretty terrible if you show them data that doesn't look like anything they've seen before. And on top of this, most of the time you can't do this anyway, because enterprises do not want their sensitive data leaving their security perimeter to go to a cloud-based LLM.


Yup, this makes a lot of sense. More or less exactly what I’ve been told by the startup I’m familiar with.


Digression: "literally, he did not believe in probabilities between zero and one. yes, such people exist. he would say things like “either it is, or it isn’t” and didn’t buy it when we tried to explain that a 90% chance and a 10% chance are both uncertain but you should treat them differently."

I think that /most/ people don't believe in probabilities. Every person who justifies risking their lives because "everyone has to die sometimes", doesn't believe in probabilities. Every person who thinks "30% chance of rain tomorrow" means a guess that it will rain 30% of the time tomorrow, doesn't believe in probabilities. But even those who do believe in probabilities, mostly have weird, anti-mathematical beliefs about them.

This was my biggest fight with the biologists at the J. Craig Venter Institute. Most of them believed that combining multiple pieces of evidence "tainted" all the evidence with the impurity of the lowest-confidence evidence. That is, most of them believed that if you found one datum that said M is an X with probability .9, and an independent datum that said p(M is an X) = .7, then if you combine them, you get p(M is an X) = .7. They also believed that when a human sets probability cut-offs to create discrete categories, that's /adding/ information rather than destroying it, because the magic of "human judgement" has been added to the data. So if I did a computation that came up with p(M is an X) = .63, they forced me to throw that info away and record only that it fell within the range .6 - .9. They felt (not thought) that the human decision to pretend there was a phase transition at the numbers .6 and .9, even though everyone knew there wasn't, was adding more information than was being thrown out.
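
For concreteness, here's a toy sketch of how two independent pieces of evidence actually combine in odds form (the numbers and the 50/50 prior are my own illustration, not anything from JCVI): the result is stronger than either piece alone, not dragged down to the weaker one.

    def to_odds(p):
        return p / (1.0 - p)

    def to_prob(odds):
        return odds / (1.0 + odds)

    # Two independent estimates that M is an X, starting from a 50/50 prior,
    # so each reported probability can be read directly as a likelihood ratio.
    p1, p2 = 0.9, 0.7
    combined = to_prob(to_odds(p1) * to_odds(p2))
    print(round(combined, 3))   # 0.955 - the evidence reinforces; it is not
                                # "tainted" down to the weaker 0.7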

And /even among the people who believe in and mostly understand probabilities/, most of them believe you can legitimately assign p=0 or p=1 to something. This is even worse than throwing away information by using cut-offs. Then even an empiricist will commit the rationalist error of being certain.

This isn't an isolated peculiarity; it's just one place where you can observe the chasm between the empirical philosophy of the sciences, and the rationalist philosophy of the humanities and the masses. (One of the main subjects of my own forthcoming substack.)


It's quite rational, in a way. Probability gets massively over-applied, especially in business. The executive who "didn't believe" in probability almost certainly did, just not in that context. It's a job hazard of those who work with data to reduce everything to numbers, even when it's useless to do so. "Risk scores" are a classic example of that. The executive who gets a "risk score" is ultimately charged with making a binary yes/no decision - and, critically, very few such decisions. If they decide yes and it all goes wrong, nobody but nobody is going to accept an excuse of "well, this model said there was only a thirty percent chance of it going wrong". Businesses don't work that way, outside of maybe professional investors, where the number of similar decisions is very high. The executive is going to get yelled at and fired anyway. That's what he meant by "it either is or isn't". He carries responsibility; people who deal in abstract probabilities for individual events don't.

This combines with a second common problem: opaque models. How was this score arrived at? Probably the executive threw it in the trash because, if ten people worked on it full time, it was too complex to verify or even explain.


In this case, it would have been strictly better to retain the probability. I wasn't "reducing" the data to numbers; the original data literally /was/ numbers. Nor were they reducing the numbers to a decision, just to a less-accurate number. And nobody anywhere along the chain was responsible for the accuracy of the final product, since we were working on a government grant to produce data, and had no clients other than people who were /also/ on a government contract and had no clients. The entire $215-million 10-year project was, AFAIK, done in the faith that "if you build it, they will come."

This last factor--the /lack/ of accountability--was IMHO what allowed the craziness to continue.

The common practice of rounding off, in order not to convey the impression of more precision than you have, is strictly for data that is to be presented to humans. From a mathematical perspective, it does nothing but add noise to the data. If you measure the error of your results, you will always get a higher error if you round off anything. These probabilities were produced by the billions and, in production, never viewed by anyone.
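
To make that concrete, here's a minimal sketch with simulated data. The .6/.9 bin edges come from the anecdote above; everything else (uniform calibrated predictions, the bin midpoints) is my own assumption for the demo. Bucketing calibrated probabilities into coarse bins can only raise the measured error.

    import random

    random.seed(0)
    n = 100_000
    probs = [random.random() for _ in range(n)]               # calibrated predictions
    outcomes = [1 if random.random() < p else 0 for p in probs]

    def brier(preds):
        # mean squared error between predicted probabilities and outcomes
        return sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / n

    def bucket(p):
        # collapse to a coarse bin midpoint, as the cut-off scheme did
        if p < 0.6: return 0.3
        if p < 0.9: return 0.75
        return 0.95

    print(brier(probs))                        # roughly 0.167
    print(brier([bucket(p) for p in probs]))   # strictly higher, roughly 0.19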


Right, maybe so, but I was talking about the original example.


Another good perspective on problems with rounding. https://dynomight.net/digits/


Wow... segmenting based on probabilities - I confess, I've not seen that one before. I spent a lot of years working in biotech, and usually you would segment the data set before running your statistical model on it. For example, in some of the pipelines I worked on for analyzing genetic variation, it was common to segment the data up front on known biological traits (haplotypes, etc.) or empirical data (e.g. gene expression levels vary by tissue type, so segment your DNA samples based on how they were collected, etc.). But saying "our priors suggest the target variable should be between 0.6-0.7, so we will throw out any estimate outside this range" seems completely backwards.


I’ve been promised the end of my career as a software nerd for more than three decades. This is one of the reasons I’m not worried: there will always be high-margin work that’s not really automatable. Well stated.


Steve Eisman is a senior managing director at Neuberger Berman, a global asset management firm. Recently he was on CNN handicapping AI uptake, and he pointed out that Accenture, the remains of the old Andersen Consulting, has two business sectors - their regular stuff and IT consulting - and the latter is booming for them because companies want them to help with... what Sarah said. And he also concludes that AI is gonna happen, but not overnight.

Another datapoint: AI seems to be especially big in logistics. See the company hype on the Prologis website. One scenario which makes sense to me from 10,000 feet: logistics firms get data from all over the place - weather reports, local news ("The bridge on Podunk Road will be out for 3 weeks") - in every conceivable format. And they want to put all that together so they can feed it into their existing software for all sorts of tasks like scheduling and routing and loading. That sounds plausible to me because it sounds like there would be enough human in the loop to keep things sane. I think LLMs are way better at finding info about your query in giant haystacks and bringing it to you than anything else they do.


Yups. In another life, working in IT, it always came back to GIGO. Integration not working after months of painstaking work - because Garbage data In makes Garbage data Out.

Which then usually wound up being PEBCAC/P - problem exists between computer and chair/person. 😉


"but my preliminary experiments with commercial LLMs (like ChatGPT & Claude) have generally been disappointing"

IMO, that's understating it significantly. This is something that I would expect the LLMs to be good at; instead, my experience with them has made me stop trying to use them at all. It's not that I could script it faster than they do it - it's that they can't do it. They say the task is complete and give you a blank spreadsheet to download, or they insert new data for some reason, or they do a completely different assignment.

I don't think it's a matter of getting training and development going on corporate data. The technology is fundamentally flawed from what I can tell. Inputs are only correlated with outputs in a probabilistic manner, and you can't even know the probabilities.


I have personal experience with what you wrote, so I totally agree with you.

My issue is how do you then sell this to other B2B clients?

I know Palantir's playbook is to find companies in existential crisis, where you can talk directly to the C-suite.

What's the second-best set of criteria if you cannot talk directly to the C-suite?

And when you do find someone that fits the Palantir criteria, how do you then talk about your previous success without saying that the clients were fucked up in some sense?

Speaking of, how do management consultants sell their previous success stories in the first place? Genuinely curious


1.) if not the C-suite then the head of whatever department you’re working in.

2.) existential crises typically involve an internal failure but there’s usually also an external threat to blame. In our client's case, the external threat was the rise in tax fraud (to the point that there were rap songs about “going Tax on the Turbo.”)

3.) case studies! you get client permission to describe the success stories. very important class of marketing material.


All absolutely true. Matches my experiences exactly, and it's a great explanation of Palantir's success.

However, my experience is that AI can do a great job of data cleaning when properly applied. The trick is to use it via the API for "micro tasks" that are context-free: reducing free text to a fixed set of categories, parsing dates that are written free-form by people in different languages, extracting addresses from PDFs, etc. It's great at this stuff. The more complex domain-specific examples, like duplicated sensor columns, *can* work sometimes if the model has read the user guides, but you have to use the most expensive models and double-check.

Basically, treat AI as an NLP library and it's incredible. Copy-paste a CSV into ChatGPT and sadness will happen.
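
For example, here's a minimal sketch of the micro-task pattern, assuming the OpenAI Python SDK; the model name and category list are just placeholders, and any chat-completion API works the same way:

    from openai import OpenAI

    client = OpenAI()
    CATEGORIES = ["billing", "shipping", "product defect", "other"]

    def categorize(free_text: str) -> str:
        # One short, self-contained question per call - no surrounding spreadsheet.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",        # placeholder model name
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Answer with exactly one of: " + ", ".join(CATEGORIES)},
                {"role": "user", "content": free_text},
            ],
        )
        answer = resp.choices[0].message.content.strip().lower()
        return answer if answer in CATEGORIES else "other"   # guard against drift

    # categorize("Package arrived crushed and two weeks late") -> "shipping"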


I'm a bit confused by this exact point. Sarah is quite right that it does not currently work to give an AI a massive dump of raw data and start asking for a predicted failure date for 10 parts.

But you are quite right that an LLM _can_ help quite a bit.

What confuses me is that the LLM can help quite a bit _now_, specifically, quite a bit _more_ than it could one, let alone two, years ago.

So I have the impression that Sarah is not trying to project the past two years forward into the next two (or three, etc) years.


I don't see any fundamental reason why LLMs, either today or in the near future, *can't* standardize data formats; I'm only commenting on my personal experience that I couldn't get it to work on my use cases. It might be a prompting issue on my part.


I guess I expect this to be a key to how they will transform business, whilst you see it as a brake, which leads to very different expectations around timelines.


Thank you for the big picture in this article, and for diving into the details with accuracy.

I'd also like to mention this open-source tool that can address pain points described in the article:

T6 IoT breaks down data silos through data fusion and open-source flexibility, allowing seamless integration across systems. Security concerns are mitigated because data can remain on internal networks, ensuring privacy and control. Automated data acquisition, transformation, and sanitization minimize labor-intensive cleaning.

For more details on T6 IoT: https://www.internetcollaboratif.info/


Fantastic article, thank you for writing this!

I work in product design & manufacturing, and am very tired of there being "digital transformation", "Industry 4.0", or "IIoT" vendors and speakers at every single trade show I go to (even unrelated shows about materials, product applications, etc). It is one of several industrial religions! At least the six sigma/KATA/lean industrial religions have some proven benefits and can be implemented today.

The challenge of getting all of our data in one place is immense, as you lay out. Despite this, we are working on it. The idea that this is THE challenge, the primary hurdle to overcome, is laughable. It's just the start of the race. Despite that, vendors are aggressively selling us on the idea that all they need is a massive pool of data to push our entire firm to the next level.

It's just funny to me. Who will label the columns of the CSV files? Have these people ever actually understood the nitty-gritty of sensors and the data they output? AI is the holy grail to these people - it is the magic bullet which prevents us from needing more data scientists than manufacturing engineers. The sad truth is that nothing I have seen from LLMs leads me to believe they will fulfill this role (for all the reasons you lay out above). The very nature of LLMs makes them insufficient for these tasks; all LLMs can do is generate a hypothetical answer based on a large library of similar problems. They have no capacity to actually solve a problem.


The last mile problem has been around for a long time, and yet even now it is overlooked or dismissed repeatedly as a detail. It's one that can only be bridged by human dialogue, ultimately.


I've had blazing arguments with colleagues about unstructured data - the full definition, how much of a company's overall data corpus it constitutes as a percentage, how valuable it really is, and how you can use emerging instruments to capture that value.

Would be most interesting to hear your take on that.


Thank you, it was a very interesting read. I've learned quite a lot about the field that I am trying to understand.


I love, love this; so many people writing about digital transformation have not actually worked in the bowels of dysfunctional large companies (i.e. most of them).

The extent to which the data just isn't there for the taking, or is such a gigantic mess that there's no short term efficient way of accessing it, is in my experience dramatically underrated from the outside.


Matches my experience too. It barely happens in most places, but when it does it is a massive amount of effort.

> If you’re imagining an “AI R&D researcher” inventing lots of new technologies, for instance, that means integrating it into corporate R&D, which primarily means big manufacturing firms with heavy investment into science/engineering innovation (semiconductors, pharmaceuticals, medical devices and scientific instruments, petrochemicals, automotive, aerospace, etc). You’d need to get enough access to private R&D data to train the AI, and build enough credibility through pilot programs to gradually convince companies to give the AI free rein, and you’d need to start virtually from scratch with each new client. This takes time, trial-and-error, gradual demonstration of capabilities, and lots and lots of high-paid labor, and it is barely being done yet at all.
