The Adjacent Analytics User
Why the adjacent user might be the most important one in your analytics stack.
I came across an article this week on the “adjacent user theory”:
The Adjacent Users are aware of a product and possibly tried using it, but are not able to successfully become an engaged user.
The theory comes from Bangaly Kaba, who used it to drive dramatic growth in the user base at Instacart.
If we agree that we must treat our internal analytics teams, services, and platforms as products, then we need to focus on all of our users — not just our power users, not just executives. When I began reading about the “adjacent user theory” — the idea that there are users who aren’t active because you are somehow missing the mark — I was struck that these are exactly the users we most often miss as internal analytics teams.
Think about a data model in your warehouse, a dashboard, or a predictive model. Your team works really hard to solve a problem for a core few users. Those users max out its utility, and then we wonder why the best-project-in-the-history-of-analytics (this is all analytics projects, amiright?) never reached, or became crucial to, the entire org.
Why is this? By assuming that the “products” we develop for power users will scale on their own once we reach full “market penetration,” we fail to consider the adjacent user’s journey. For instance, the article points out that an Instacart user’s experience in Kansas City has to be different from that of a user in the Bay Area.
By taking notes from major growth stories, we can begin to apply these concepts to our own teams. This is an often-ignored problem that might just be THE problem worth solving!
Customer or Stakeholder or Partner?
I have long held the belief that for internal data and analytics teams, the word “stakeholder” should be absent from the team’s vocabulary. “It sounds good” or “It implies they have a stake in the game” is the pushback I hear. However, “stakeholder” implies something deeper — at times, almost a power struggle.
John Cutler makes the argument that internal product teams should refer to their internal counterparts as partners, not customers. He says:
But in most cases they aren't your customers (in some companies, money does change hands, but that is an exception)! They are your partner in the company's quest to help external customers. Partners collaborate and don't lob requests. Partners trust each other's expertise. The power dynamic is different between partners.
As an analyst, you have probably been posed the question, “Can I have a dashboard with a gigantic wall of charts? I know I need this analytics thingy, but I don’t want to think about how to measure anything.” OK, maybe not, but you get the idea. By telling the teams with whom you are partnering that they are customers or stakeholders, the relationship all of a sudden becomes transactional: you do this for me, I give you this $$ back. In reality, great analytics teams work alongside product, design, supply chain, purchasing, finance…..all the teams…..to make better decisions, inform better outputs, and provide better services to OUR customers.
It is simple, but start by calling the internal teams for which you are producing a service or platform what they are - partners.
How do we build this data model thing?
As an experienced data professional, I have seen many waves of change, vendor marketing and positioning, and half-baked advice from the masses. One pervasive theme that keeps popping back up is the push to write everything to ONE BIG TABLE. A really great talk on this was given by a few consultants at dbt’s conference, Coalesce.
One Big Table is a pattern that emerged to leverage modern data stack warehouses, which can perform less efficiently when executing many joins. So, why not one big table?
In the past, you would create tables (using Kimball’s data warehousing methodology) called dimensions and facts. These could then be “normalized” to reduce duplication of data. Most people equate the physical model (dimensions and facts) with Kimball. However, Kimball advocated a three-layer modeling approach:
Conceptual
Logical
Physical
Creating conceptual and logical data models helps you modularize your code (in dbt, or your data flows in your favorite visual data transformation tool). More importantly, it models the business relationships you are trying to analyze. By skipping this modeling step and jumping straight to a gigantic query that produces one big table, we lose the usability of the physical data assets we eventually develop.
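To make this concrete, here is a minimal sketch (using Python’s built-in sqlite3 and an entirely hypothetical schema — the table and column names are mine, not from any real warehouse) of how the two approaches can coexist: you keep modular fact and dimension tables as the modeled layer, then derive the “one big table” from them as a downstream view, rather than making a single wide query your only physical asset.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Modeled physical layer: a fact table plus dimensions (Kimball-style).
# Each piece is small, reusable, and testable on its own.
cur.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fct_orders   (order_id INTEGER PRIMARY KEY,
                           customer_id INTEGER,
                           product_id  INTEGER,
                           amount REAL);
INSERT INTO dim_customer VALUES (1, 'Kansas City'), (2, 'Bay Area');
INSERT INTO dim_product  VALUES (10, 'Produce'), (11, 'Dairy');
INSERT INTO fct_orders   VALUES (100, 1, 10, 25.0), (101, 2, 11, 40.0);
""")

# The "One Big Table" becomes a derived, downstream model -- one join away
# from the modular layer -- instead of replacing the modeling step entirely.
cur.execute("""
CREATE VIEW obt_orders AS
SELECT o.order_id, c.region, p.category, o.amount
FROM fct_orders o
JOIN dim_customer c USING (customer_id)
JOIN dim_product  p USING (product_id)
""")

rows = cur.execute(
    "SELECT region, category, amount FROM obt_orders ORDER BY order_id"
).fetchall()
print(rows)  # [('Kansas City', 'Produce', 25.0), ('Bay Area', 'Dairy', 40.0)]
```

The point of the sketch is the dependency direction: the wide table is built from the conceptual/logical model, so analysts who want one flat table get it, while everyone else can still reuse the dimensions and facts underneath.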
Don’t lose sight of this: the most important part of data modeling is not the physical representation you create in your data warehouse, but how others access and utilize that information to make decisions. Kimball’s methods aren’t dead; they should still be a crucial tool in your tool belt!
Please know that all opinions expressed here are mine, and not representative of my family, friends, employer, etc. I would love to get feedback on what is useful and what is not - shoot me a message on LinkedIn. Last year, I posted more consistently on LinkedIn and had many people reach out to give me a pat on the back for helping them see something differently or learn something new. Those messages keep me going!