Application observability is so mainstream that even finance junkies have a high-level understanding of what application alerting is through NEWR (New Relic) and DDOG (DataDog) in the stock market. Analytics teams weren’t satisfied, though. Neither New Relic nor DataDog addresses data quality completely—cue the explosion of data observability tooling within the last couple of years.
And still, obvious errors make their way into customer-facing experiences. Consumers get emails starting with “Hello {first_name}”. While this situation is an unfortunate and glaring inconvenience, it gets much worse.
Many Salesforce Administrators maintain lead scoring formulas to prioritize account executive time while maximizing revenue. A Salesforce lead scoring formula like CASE(LeadSource, "Organic", 2, "PaidAds", 1, 0) breaks silently when “Organic” changes to “Direct” and no one is notified, an error that hits company financials directly. Suppose organic traffic makes up 30% of all leads and is the most likely segment to convert into a sale; failing to reach out to those leads first is at least a 30% hit to revenue if they really do convert that well.
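One way to catch this class of failure is to audit incoming values against the set the formula actually handles. The sketch below is illustrative, not a real Salesforce integration: field names and values are hypothetical, and in practice the lead records would come from the Salesforce API rather than a hard-coded list.

```python
# Hypothetical sketch: guard a Salesforce-style lead scoring formula by
# checking that every incoming LeadSource value is one the formula knows about.

EXPECTED_SOURCES = {"Organic", "PaidAds"}  # values the CASE() formula handles

def score_lead(lead_source: str) -> int:
    """Python mirror of CASE(LeadSource, "Organic", 2, "PaidAds", 1, 0)."""
    return {"Organic": 2, "PaidAds": 1}.get(lead_source, 0)

def audit_sources(leads: list[dict]) -> set[str]:
    """Return LeadSource values the scoring formula silently maps to 0."""
    return {lead["LeadSource"] for lead in leads} - EXPECTED_SOURCES

leads = [
    {"LeadSource": "Organic"},
    {"LeadSource": "Direct"},  # renamed upstream; the formula no longer matches
]
unknown = audit_sources(leads)
if unknown:
    print(f"Alert: unscored lead sources: {sorted(unknown)}")
```

Run continuously, a check like this surfaces the “Organic” to “Direct” rename the day it happens instead of a quarter later.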
🙋‍♀️ Definition: SaaS Observability sheds light on the health of all data and automations in SaaS tools, allowing business teams to monitor, understand, and triage issues.
Engineers get alerts when apps go down. Data teams get alerts when data doesn’t meet expectations. Why? Because they’ve been provided the tools to do so. Sales and marketing ops teams are misunderstood and only just beginning to get tech-forward tools that let them work at maximum efficiency. Business operations teams, arguably the most critical, need to know when their automations aren’t working properly, even if the root cause is the data itself.
Ownership falls on operational teams
Customer success teams get frustrated when they find bugs in a user flow, as they should—they don’t own the application. Marketing teams get frustrated when they have to point out funky data making its way into Hubspot on behalf of the data team, the owners of the data itself. Similarly, sales teams don’t want to be in a position where the Director of Finance is mysteriously wondering why new revenue is down 30% over the last 6 months.
Everything Salesforce falls under the sales team, no matter the root cause. Consider this: a data flow built by a data engineer you’ve only talked to once in your life is the specific flow responsible for changing “Organic” to “Direct” to describe leads that aren’t attributed to a source. Should this change have been communicated outside of the data team? Yes, of course. Will it? Eh, having led and consulted for several data engineering teams, that’s a 50/50 bet.
❌ Problem 1: Business teams cannot rely on coin flip chance to ensure their tools are behaving as expected.
While engineers have an engineering mindset, analytics folks have a data mindset. The infamous modern data stack has tooling to connect to APIs, bring data into a warehouse, transform the data to draw insights, and report on it. While this stack combined with a data observability tool may diagnose the issue at hand, business operations should not need to be proficient in SQL, dbt, or data pipelines.
SQL is beneficial in any position, sure—sales ops, marketing ops, customer success ops, generalized business ops, you name it. However, none of these business ops teams should need to rely on a data team to catch bugs in flows ops teams themselves create. If ownership of Salesforce lead scoring models is in the sales team, why shouldn’t they be able to build them accurately and efficiently without involving half a dozen other tools?
❌ Problem 2: Existing tooling for business teams is far too tech-heavy to get an answer on SaaS Observability.
This is not a problem requiring machine learning, statistics, forecasting, etc. SaaS Observability is a problem of monitoring existing SaaS configurations and ensuring they are correct in an environment where inputs are constantly changing.
Two key elements of SaaS Observability
Running with the Salesforce lead scoring example, an issue can be both caused and diagnosed in two different ways: data quality and expected behavior.
🎯 Having bad data quality in SaaS tools means information on objects like leads or tickets is missing or unexpected.
Perhaps to no one’s surprise, data quality means the same thing in SaaS tools as it does in data pipelines. The urgency, however, is much higher in the operational case, where revenue is left on the table. Anything built on false assumptions will not reach its full potential: if you, a Salesforce Administrator, build automations on assumptions about your leads that the data doesn’t actually meet, those automations won’t perform as expected.
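Expectations like these can be written down and checked record by record. A minimal sketch, assuming hypothetical field names rather than a real Salesforce schema:

```python
# Illustrative data quality check on SaaS records: flag leads whose fields
# are missing or fail simple expectations. Field names are hypothetical.

def check_lead(lead: dict) -> list[str]:
    """Return a list of human-readable data quality issues for one lead."""
    issues = []
    if not lead.get("Email"):
        issues.append("missing Email")
    if not lead.get("FirstName"):
        issues.append("missing FirstName (risks 'Hello {first_name}' emails)")
    if lead.get("LeadSource") not in {"Organic", "PaidAds", "Referral"}:
        issues.append(f"unexpected LeadSource: {lead.get('LeadSource')!r}")
    return issues

leads = [
    {"Email": "a@example.com", "FirstName": "Ada", "LeadSource": "Organic"},
    {"Email": "", "FirstName": None, "LeadSource": "Direct"},
]
for lead in leads:
    for issue in check_lead(lead):
        print(issue)
```

The point is not the specific rules but that a Salesforce Administrator can state expectations explicitly and be alerted when records stop meeting them.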
A foundation of good data quality is a start but by no means the end. The SaaS debt problem is a direct result of complex business logic built inside UI-based tools with very few checks in place. Business teams are put under too much pressure to ship quickly, without the time to build in audits and processes to ensure all their hard work scales effectively.
🎯 The behavior resulting from erroneous automations in SaaS tools displays anomalies that can be monitored and tested for.
Behavior can take many different forms depending on the application. In sales lead scoring, it could be the percentage of leads that are top rated. In a marketing email campaign, it could be the percentage difference day over day of emails sent. Finding anomalous behavior may involve a bit more statistics and number crunching as we’re dealing with time series data.
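For a daily metric like emails sent, even a simple statistical test catches sharp deviations. The sketch below flags a day that sits far outside the recent average; the z-score threshold and sample numbers are illustrative assumptions, not tuned values.

```python
# Minimal behavioral monitoring sketch: flag a day whose emails-sent count
# deviates sharply from the recent history. Threshold is an assumption.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag `today` if it is more than z_threshold std devs from history's mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

emails_sent = [1020, 980, 1005, 990, 1010, 1000, 995]  # last 7 days
print(is_anomalous(emails_sent, 1012))  # → False (typical day)
print(is_anomalous(emails_sent, 120))   # → True (campaign likely broken)
```

Real traffic has weekly seasonality and trend, so production monitoring usually needs more than a z-score, but the principle is the same: behavior is measurable, so it is testable.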
Sales isn’t the only team with automations, nor is it the only business team with high-impact bugs.
For any type of issue, building scalable automations shouldn’t involve data teams. Business teams should be self-sufficient in building scalable systems within the tools they use.
Get the full picture and act quickly
To solve any problem, you have to:
know something is broken in the first place,
know why it’s broken,
know how to fix the root cause.
When it comes to SaaS Observability, business teams stumble upon issues by chance. With this approach, there’s no reliable way to know where problems lie. Some teams choose to do a quarterly audit, occupying 3+ people for over a week. This is a colossal waste of time.
What if a SaaS audit were done continuously, proactively alerting business teams of issues as they occur? Knowing something is broken is a necessary first step, but it is not enough. To truly move the business, a problem must not only be found but also triaged and quickly fixed. While data teams have started down this path with data observability, business teams don’t have the tools to do so efficiently.
SaaS tools are complex systems and must be scaled for the sake of all business operations—cue SaaS Observability tooling.
Thanks for reading! I talk all things marketing, analytics, and growth. Don’t hesitate to reach out.