Analytics Engineering Unwrapped 2024


0. Executive Summary

  • AI-powered tools are transforming analytics workflows.
    • Teams using AI features save an average of 165 hours annually and upwards of $50,000 in costs.
    • Power users of AI-driven tools like Paradime’s save their organizations nearly $12,000 annually on test generation alone — more than twice as much as peers who follow standard usage patterns.
    • AI-assisted commit message generation is the most frequently used AI-powered productivity feature, while test generation delivers the largest time and cost savings.
  • A single daily job run remains the best balance of cost and data demands for most teams.
    • 75% of projects run at least one job daily, and only 13% run jobs more frequently than hourly.
    • 50% of all pipeline runs across projects are scheduled daily, while only 4% run hourly or more frequently.
  • Testing coverage is low across projects, especially as they scale, indicating room for improvement.
    • Only 8.7% of models across projects have at least one test.
    • Test coverage — the average percentage of models with at least one test — drops from 18.5% in small projects (1-50 models) to 8.2% in large projects (500+ models).
    • Coverage drops further when counting only models with at least one test other than a non-null test.
    • Even among models with tests, column coverage averages only 2-7%.
  • Materialization strategies vary by project size, indicating that larger, more mature projects adopt efficiency and cost-optimization measures at higher rates.
    • Adoption of incremental models peaks at 13.89% for projects with 201-500 models.
    • Use of incremental models increases with project size up to 500 models, then declines slightly for larger projects.
    • Use of Snapshot models steadily increases with project size, reaching 2.68% for projects with 500+ models.
  • Command structure remains relatively simple, with analytics engineers overwhelmingly relying on a handful of basic dbt™ commands.
    • A significant proportion of commands in production jobs use neither the select nor the exclude flag (though the share of commands using one or both flags tends to increase with project size).
  • In the future of analytics engineering, teams will need to:
    • Demonstrate tangible ROI on data initiatives.
    • Leverage AI to enhance productivity and data quality.
    • Optimize data pipelines for efficiency and cost-effectiveness.
    • Balance innovation with pragmatic, widely adopted best practices and more comprehensive measurement.
As analytics engineering evolves, professionals must adapt to new challenges and opportunities, focusing on quantifiable business impact while leveraging emerging technologies to drive efficiency and data quality.

I. The New Analytics Engineering

The field of analytics engineering is undergoing rapid transformation, driven by the increasing strategic importance of data, a push for analytics engineering teams to demonstrate clear ROI, and the growing adoption of AI-powered tools to drive productivity and best practices.
In the background, three interrelated trends are creating the tailwinds pushing analytics engineers to engage more deeply with business problems.

1.1 Demonstrating value and ROI

Analytics teams are under growing pressure to quantify their impact on the business. It’s no longer enough to build robust data pipelines or enable sophisticated dashboards. Analytics engineers must show how their work translates into tangible benefits.
“What I care about is the business value,” says Alexandre Carvalho, director of engineering at Stenn and former director of data at Checkout.com. “It’s the impact the team is having. How do we get there? Yes, it’s important. I’m an engineer, I did computer science, and I love the engineering part. But the thing that really matters to me is the business impact.”
Budget scrutiny and a demand for hard numbers have only become more prevalent since the tech downturn of 2023. Analytics engineers face increasing pressure to demonstrate greater efficiency in data pipelines and effective use of limited resources.

1.2 Rising influence of data leaders

As data becomes central to business strategy, data and analytics leaders are taking on more prominent roles. Many now report directly to C-suite executives, including CEOs, COOs, and CFOs. But to become true partners in driving organizational success, analytics engineers must balance the traditional builder’s mentality with hard-nosed business sensibilities and an ability to measure the value they deliver.

1.3 Unleashing the power of AI

While downstream generative AI apps may have upped costs, AI-powered tools are also problem solvers. They are already having a measurable, real-world impact on the productivity of analytics engineering. Our analysis of Paradime data backs up anecdotal evidence that teams are adopting AI-powered tools for tasks like test generation, model correction, and inline assistance. Those that do are saving hundreds of hours and thousands of dollars annually. The evidence points to benefits not just in cost savings and productivity but also in the broader adoption of testing and other best practices that have direct impacts on data quality.

Users of AI-powered productivity features save $50k on average

Average annual cost savings from productivity gains in dbt™ and SQL in Paradime using DinoAI Copilot
*For an explanation of time and cost savings calculations, please see the Methodology section in the Appendix.
On average, teams are saving more than $50,000 in annual costs across the eight principal Copilot-style features within Paradime. By measuring the dollar impact of AI-driven tools with precision, analytics engineers are able to demonstrate efficiency gains and cost savings in a manner that is legible across an organization.
In sum, as organizations increasingly rely on data for decision-making, analytics engineers are elevating their roles and becoming strategic partners in driving business value. AI-powered tools promise to be accelerants in this trajectory, but in many instances it has been difficult to benchmark adoption and gauge business impact.
Here, we have unwrapped exclusive data alongside insights from industry leaders to shed light on these changes.
We’ll identify trends, best practices, and future directions for the field. Along the way, we will bust myths but also reaffirm some uncomfortable truths. We’ll even provide instructive comic relief in the form of the most common dbt™ “bloopers.” Most importantly, we’ll provide hard data for teams seeking comparisons and benchmarks to guide their own efforts.

II. Leveraging AI in Analytics Engineering

Artificial intelligence is rapidly transforming analytics engineering, just as it is transforming the work of all software developers.
New tools are automating routine tasks, driving enhanced data quality, and fixing errors on the go.
Paradime’s data reveals the growing adoption of AI-powered features among analytics engineering teams, as well as the benefits in time and money saved. It also reveals disparities in the extent to which these tools are being used. Teams using Copilot-style tools at a higher frequency are seeing a disproportionate share of the benefits.
To illustrate the gap, we’ll compare the behavior and outcomes of two different cohorts:
  • One we define as “power users” of AI-driven tools. This cohort groups together all users who have performed a given AI-assisted action at least ten times in the period under analysis (the last ~18 months).
  • The “standard users” are those who have used the specific AI-powered feature at least once, but fewer than 10 times. (A simple sketch of this segmentation follows below.)
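As a rough sketch of how such a segmentation can be reproduced, the query below buckets users by how often they have used a given AI-powered feature. The table and column names (ai_feature_events, user_id, feature_name) are hypothetical placeholders rather than Paradime’s actual schema.

```sql
-- Hypothetical sketch: split users into "power" and "standard" cohorts
-- based on how many times they have used a given AI-powered feature.
select
    user_id,
    feature_name,
    count(*) as feature_uses,
    case
        when count(*) >= 10 then 'power user'     -- 10 or more uses in the analysis window
        else 'standard user'                      -- 1 to 9 uses
    end as cohort
from ai_feature_events   -- placeholder table of AI feature usage events
group by user_id, feature_name
```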

Power users of AI-driven productivity features save up to 4x more time in dbt™ and SQL

Annual hours saved in Paradime DinoAI Copilot across commit message and test generation, inline assistance and error correction
Paradime power users are those who have adopted each action and performed it 10 or more times. For details, please see the Methodology section in the Appendix.
The data represents annual time saved, on average, by a user of each type, classified according to their overall frequency of using AI-powered features. These time savings compound year after year with uptake across teams.
Naturally, the greater usage frequency of power users means they absorb a larger share of the time and cost benefits compared to the other cohort.
The insight, however, is the extent to which power users outpace standard users in saving time and money.
The time savings are significant:
  • On average, power users of Paradime’s AI-driven productivity tools are saving 165 hours annually in generating dbt™ tests, compared to less than half as many hours saved by standard users.
  • Power users are saving nearly 90 hours annually with inline assistance, more than 4x the time being saved by standard users.
Using a baseline of how long certain tasks take for a human employee, it’s straightforward to calculate how much time is saved with features like auto-generated commits and tests, model fixes, and autocomplete. The final step is to use average analytics engineering salaries to calculate the time savings in cost terms:
  • Power users of the AI-driven productivity features are saving their organizations nearly $12,000 annually in test generation costs,
  • Around $6,350 annually in inline coding assistance,
  • And nearly $2,000 in model scanning and error fixes.
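To make the arithmetic concrete, take test generation. The hourly rate below is an assumption chosen for illustration; the report’s own methodology derives rates from average analytics engineering salaries across major global markets (see the Appendix).

  165 hours saved per year × ~$72/hour (assumed blended rate) ≈ $11,900 per year,
  which is in line with the “nearly $12,000” annual test-generation savings cited above.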

Power users of AI-driven productivity features save up to 4x more in annual costs

Annual cost savings in Paradime DinoAI Copilot across commit message and test generation, inline assistance and error correction
Individual power users save an average of $1,000 or more from generating tests, inline assistance, and model fixes with the help of AI.
This data suggests analytics engineering teams should move beyond an informal experimentation phase and integrate these new AI-powered tools directly into everyday workflows.
“Using AI to help write code is going to be massively important from a productivity perspective,” says Alexandre Carvalho, director of engineering at Stenn. “I think that the best engineers will be those that are most effective in using AI. I don’t think people should fear that AI will take their jobs. I think people should fear not leveraging AI to do their jobs quicker and better.”
Of course, specific tasks require more time to complete than others, so time savings and cost savings do not necessarily correlate with how often these tools are used.
Test generation ranked only fifth in terms of overall frequency of actions but generated a disproportionate share of the time and cost savings. In other words, because of how time consuming human-driven, manual test generation can be, test generation is a particularly high-leverage task for AI.
On the other hand, commit messages are not particularly time-consuming to generate, but AI-powered commit message generation was the most frequently used AI feature, likely because it is tedious for engineers to document each change they make. As any developer will tell you, in the absence of AI-powered automation, these messages are often left blank or completed with generic or even arbitrary information.
AI-Powered Productivity Feature | Usage Frequency Rank
DinoAI Generate Commit Message | 1
DinoAI Inline Assistant | 2
DinoAI Fixed dbt™ Model | 3
DinoAI Autocompletion | 4
DinoAI Generate dbt™ Tests | 5
DinoAI Explain dbt™ Model | 6
DinoAI Create dbt™ Model | 7
DinoAI Generate Elementary Tests | 8
We’ll return to testing and documentation again to highlight the opportunity to improve data quality.

III. Analytics Engineering Benchmarks IRL

To understand the current landscape of analytics engineering, we analyzed real-world usage data from Paradime users, focusing on their interaction with dbt™ and other analytics tools. This analysis reveals both dominant patterns and more advanced or emerging practices in the field that can be distilled from looking at larger or more complex projects (the number of models in a project or the number of maintainers can be considered proxies for maturity and complexity).

3.1 Frequency of data pipeline runs

There is a lot of chatter around streaming data and high-frequency data ingestion. That said, 75% of projects are running at least one job at a daily frequency, and only 13% are running a job at a frequency higher than hourly. This distribution implies that daily cadences still dominate at most organizations.

Nearly 75% of projects run production jobs on a daily frequency

Despite buzz around real-time and streaming data, a significant majority of projects rely on daily jobs. Only 13% use frequencies higher than hourly.
The percentage of projects running jobs at a given frequency peaks at the daily cadence, decreasing sharply as frequencies rise from several times a day to hourly and on toward truly real-time data pipelines.
For context, dbt™ projects in Paradime average between 2 and 13 production jobs, depending on their size (as measured by the number of models). Projects with up to 50 models average two jobs, while those with 201 to 500 models average 13.
When looking across projects at the distribution of production job frequency in aggregate, a similar picture emerges.
  • 50% of data pipeline runs are scheduled to run daily
  • 22% run several times a day
  • 11% run monthly
  • 7% run weekly
  • 6% run several times a week
  • 2% run hourly
  • 2% are high frequency, i.e., scheduled in real-time or near real-time (e.g., every few minutes)
In other words, nearly three-quarters of total jobs (72%) are set to run daily or several times a day. Meanwhile, 24% are set at a frequency of several times a week or lower — i.e., several times a week, weekly, or monthly.
Less than 1 in 20 jobs, 4%, are scheduled hourly or more frequently (e.g., every few minutes).
This data implies that running pipelines daily (or less frequently) strikes the right balance with most organizations’ data needs, and that higher frequencies would not deliver enough ROI to justify the added cost and overhead at many companies.
That’s not to say that streaming and high-frequency data pipelines are not important in specific industries and use cases, as we learned from several data leaders. There’s growing adoption of higher-frequency and real-time data processing in industries like hedge funds and asset management.

3.2 Testing

Our analysis of Paradime user data reveals significant gaps in testing across analytics engineering teams. These insights shed light on current trends and point to areas where AI might be leveraged to improve data quality.
  • Overall test coverage is low, with an average of only 8.7% of models across projects having at least one test.
  • Test coverage — or the percentage of models with at least one test — is inversely related to project size:
    • In smaller projects with 1 to 50 models, an average 18.5% of those models, or nearly 1 in 5, had a test.
    • In medium-sized projects with 51 to 200 models, that average falls to 10%.
    • In projects with 201 to 500 models, coverage continues to drop to an average of 8.8%.
    • In projects with 500+ models, the average is 8.2%.
  • The same relationship holds for other test metrics, including the percentage of models with tests other than non-null tests and the percentage of columns with tests. Coverage drops in a stair-step fashion as project size increases, with the biggest drop-off coming once projects grow larger than 50 models.
The fact that test coverage drops with project size isn’t surprising in and of itself. As projects scale, it’s difficult for engineers to keep pace and sustain test creation. AI-powered test generation can help close this gap, but as we’ve seen, it is still less widely used than other AI-powered features.
However, if we accept project size as a proxy for data project complexity and/or maturity, the drop-off in test coverage is worrisome. The implication is that the more mature and more complex the data project, the higher the proportion of models relying on untested data.
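For teams that want to benchmark their own projects against these coverage figures, model-level test coverage can be approximated from dbt™’s own metadata. The macro below is a minimal sketch, assuming a dbt version that exposes the graph context at execution time; invoked with dbt run-operation, it simply logs the share of models that have at least one test attached. It is not Paradime’s implementation.

```sql
-- macros/test_coverage.sql (hypothetical file name; a rough sketch)
{% macro test_coverage() %}
    {% set models = graph.nodes.values() | selectattr('resource_type', 'equalto', 'model') | list %}
    {% set tests  = graph.nodes.values() | selectattr('resource_type', 'equalto', 'test')  | list %}

    {# Collect the unique IDs of models that at least one test depends on #}
    {% set tested = [] %}
    {% for t in tests %}
        {% for parent in t.depends_on.nodes %}
            {% if parent not in tested %}{% do tested.append(parent) %}{% endif %}
        {% endfor %}
    {% endfor %}

    {% set covered = models | selectattr('unique_id', 'in', tested) | list | length %}
    {{ log('Models with at least one test: ' ~ covered ~ ' of ' ~ (models | length), info=true) }}
{% endmacro %}
```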

Test coverage for fewer than 10% of models across projects

Across all projects in Paradime, only 8.7% of models have at least one test.
Another insight: if we look for tests beyond fundamental non-null tests, adoption drops further. For example:
  • In small projects with 1 to 50 models, an average of 18.5% of models had at least one test, but that average dropped to 13.9% for models with at least one test that was not a “not-null” test. That’s a significant drop, meaning that only roughly 14 in 100 models, on average, had testing beyond non-null tests.
  • Among projects with 201 to 500 models, an average 8.8% of models had at least one test, but an average 7.5% had at least one test that was not a “not-null” test.
Not-null tests, which check that specified columns do not contain null (vs. non-null) values, are often the first type of test implemented in a data project. They serve as a starting point for more comprehensive testing. As with overall coverage, testing beyond non-nulls declines with project size, ranging from an average of 14% of models in small projects to only 7% in extra-large projects.
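For readers less familiar with how these tests are declared, the snippet below is a minimal example of dbt™ schema tests: a not_null check plus tests that go “beyond non-nulls.” The model and column names are illustrative.

```yaml
# models/schema.yml (illustrative model and column names)
version: 2

models:
  - name: orders
    columns:
      - name: order_id
        tests:
          - not_null          # the basic check: the column may not contain nulls
          - unique            # one step beyond non-null testing
      - name: status
        tests:
          - accepted_values:  # another test beyond non-nulls
              values: ['placed', 'shipped', 'completed', 'returned']
```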
Moreover, when looking at average column coverage, coverage also drops significantly as project size increases. In small projects, only 7% of columns have tests. In medium to extra-large projects, an average of 3.5% or fewer columns are tested.

Testing coverage declines with project size, with a steep drop-off beyond 50 models

Data underscores the need for test generation assistance, particularly as projects grow in size and complexity.
Testing coverage declines steadily with project size; in projects with more than 500 models, only 7% of models have tests other than non-null tests.
Putting all the test data together:
  • On average, a low percentage of models have any testing coverage at all, and the larger the project is, the lower the average test penetration (e.g., in projects with 500+ models, an average of only 8.2% of models have tests).
  • Among models with tests, a significant proportion are running only non-null tests.
  • Column coverage averages between 7% for small projects (those with up to 50 models) and 2% for extra-large projects (which have 500 or more models). This means that even in models with tests, only a fraction of the data is being validated.
The implications for data quality are obvious: This low average test coverage at the model and column level, especially in larger projects, suggests many organizations are leaving broad swaths of their data untested.
While teams will doubtless focus tests on the most crucial models and columns, any gaps increase the risk of errors propagating through data pipelines and negatively impacting business decisions.
Further, the drop in test coverage when looking beyond non-null checks suggests there’s a significant reliance on the most basic tests.
AI-powered test generation, if it becomes more widely adopted, presents a clear opportunity for improving test coverage.

3.3 Materialization strategies

Our analysis of Paradime usage data reveals interesting patterns in how organizations leverage different materialization strategies. These strategies play a crucial role in optimizing data pipeline performance and resource utilization.

3.3.1 Incremental models: Balancing efficiency and complexity

Incremental models update only new data rather than performing full rebuilds, resulting in meaningfully shorter runtimes. The strategy is particularly impactful with resource-intensive transformations since it allows for the transformation to focus only on new records.
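For reference, a minimal incremental model in dbt™ looks like the sketch below: the config block sets the materialization, and the is_incremental() filter restricts processing to new records on subsequent runs. The model and column names are illustrative.

```sql
-- models/fct_events.sql (illustrative names)
{{ config(materialized='incremental', unique_key='event_id') }}

select
    event_id,
    user_id,
    event_type,
    event_timestamp
from {{ ref('stg_events') }}

{% if is_incremental() %}
  -- On incremental runs, only process records newer than what is already loaded
  where event_timestamp > (select max(event_timestamp) from {{ this }})
{% endif %}
```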
Adoption of this strategy shows a nuanced pattern:
  • Small projects (1-50 models): 5% of models
  • Medium projects (51-200 models): 11% of models
  • Large projects (201-500 models): 14% of models
  • Extra-large projects (500+ models): 5% of models

Adoption of advanced materialization strategies increases with indicators of project maturity and complexity, like number of models – except in very large projects

In dbt™ projects in Paradime that have 201 to 500 models, ~14% of models are incremental
Incremental materialization is used in roughly 9% of all models across projects but in a greater proportion (14%) in large projects with between 201 and 500 models.
The pattern holds when employing a different indicator of project maturity and complexity — the number of maintainers on a project.

Adoption of advanced materialization strategies tends to increase with indicators of project maturity and complexity, like number of maintainers

In dbt™ projects in Paradime that have 11 to 15 maintainers, ~14% of models are incremental
Incremental materialization usage grows with indicators of project maturity and complexity, like the number of maintainers — up to a point, dipping lower for projects with the most maintainers.
This “inverted U” pattern reveals that while incremental models become more prevalent as projects grow, there’s a slight decrease in adoption for the largest projects.
This could be due to:
  • Complexity threshold: At extreme scales, the overhead of configuring and managing many incremental models might outweigh their benefits.
  • Alternative strategies: Very large projects might employ different optimization techniques or architectural patterns to manage costs and drive efficiency.
  • Diverse use cases: Extremely large projects might serve a wider variety of needs, some of which may not benefit from incremental models.

3.3.2 Snapshot models: Tracking historical changes

Snapshot models capture point-in-time data for historical analysis and are often used to track changes in data over time. While less widely used across the board, Snapshots show a clear trend of increased adoption as project size grows:
  • Small projects (1-50 models): 0.35% of models
  • Medium projects (51-200 models): 0.85% of models
  • Large projects (201-500 models): 1.27% of models
  • Extra-large projects (500+ models): 1.85% of models
This pattern suggests that as projects become larger — and likely more complex and mature — the need for historical tracking of slowly changing dimensions becomes more critical.
For example, larger projects often face stricter data governance and auditing requirements, which Snapshots can help address by providing point-in-time views and facilitating data lineage analysis. As well, larger and more mature projects are more likely to be found at larger firms, which may have a greater propensity for the sophisticated lead tracking and historical backtesting that Snapshots support.
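For reference, the sketch below shows a minimal dbt™ snapshot using the timestamp strategy to track a slowly changing dimension; the schema, model, and column names are illustrative.

```sql
-- snapshots/customers_snapshot.sql (illustrative names)
{% snapshot customers_snapshot %}

{{
    config(
      target_schema='snapshots',
      unique_key='customer_id',
      strategy='timestamp',
      updated_at='updated_at'
    )
}}

-- Each run records a new version of any row whose updated_at has changed,
-- preserving point-in-time history for auditing and backtesting.
select * from {{ ref('stg_customers') }}

{% endsnapshot %}
```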
Snapshot usage increases with dbt™ project size
Nearly 2% of models in “extra-large” projects with 500 or more models use Snapshot
Snapshot usage is positively correlated with the number of models in a project.

3.3.3 Ephemeral models

Ephemeral models are not built into databases and do not persist, so they allow companies to perform lightweight transformations “in flight” and avoid cluttering their data platforms. These models can’t be queried, so they are limited in their downstream utility. Not surprisingly, ephemeral models don’t see adoption in small projects with up to 50 models but otherwise roughly follow a version of the “inverted U” pattern noted earlier with incremental models.
  • Small projects (1-50 models): 0% of models
  • Medium projects (51-200 models): 2.03% of models
  • Large projects (201-500 models): 6.82% of models
  • Extra-large projects (500+ models): 0.97% of models
  • Average across projects: 2.6% of models
Large projects of between 201 and 500 models have an average of nearly 7% ephemeral models, roughly three times more than the percentage seen in medium-sized projects. Prevalence of ephemeral models drops off to less than 1% in the largest projects, very likely for similar reasons to the tapering-off seen at that project size for incremental models.
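Configuring a model as ephemeral is a one-line change. In the illustrative sketch below, the logic is never built in the warehouse; dbt™ interpolates it as a common table expression wherever downstream models reference it.

```sql
-- models/int_orders_enriched.sql (illustrative names)
{{ config(materialized='ephemeral') }}

-- Not persisted to the warehouse; injected as a CTE into downstream models that ref() it
select
    o.order_id,
    o.customer_id,
    c.segment
from {{ ref('stg_orders') }} o
left join {{ ref('stg_customers') }} c
    on o.customer_id = c.customer_id
```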

3.4 Optimizing production jobs

Another way to look at optimization is to see whether analytics engineers are refining their jobs to minimize the unnecessary use of resources.
The select and exclude flags are used to optimize data pipeline runs by allowing users to run only specific parts of their project, rather than the entire project. This targeted approach translates to significantly reduced execution time and processing costs, especially in larger projects.
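For illustration, the commands below show typical uses of these flags; the selector values, tags, and paths are hypothetical.

```
# Run one model and everything downstream of it
dbt run --select stg_orders+

# Run everything tagged 'daily' except deprecated models
dbt run --select tag:daily --exclude tag:deprecated

# Build only what changed versus production artifacts (requires a saved state/manifest)
dbt build --select state:modified+ --state path/to/prod-artifacts
```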
However, these flags aren’t always used to the extent one might expect.
“People tend to use just dbt run as a whole with no selectors and no way of refining the set of commands they want to run,” says Alexandre Carvalho at Stenn. “That’s actually pretty bad practice. The implication is that you’re rerunning the whole project, and there are significant costs and time associated with that.”
In the data, we see a marked tendency toward increased usage of selective runs as project size grows. Of course, large projects have more of a need for these flags, as there’s more to potentially select or exclude. Still, the data suggests that there are unseized opportunities:
  • The exclude flag, on its own, is essentially unused across projects with up to 500 models, and appears in only 4% of commands in extra-large projects with more than 500 models.
  • In large projects, 1 in 10 commands are running without either flag. In extra-large projects with more than 500 models, 23% of jobs are running with neither flag.
Optimization tactics, like use of the select and exclude flags in commands, are more widely used in larger projects, but many jobs still run with neither flag.
To put this data in context, the average number of commands per job also varies by project size, peaking at roughly 3 commands in large projects with 201 to 500 models:
  • Small projects (1-50 models): 1.8 commands
  • Medium projects (51-200 models): 1.2 commands
  • Large projects (201-500 models): 3.3 commands
  • Extra-large projects (500+ models): 2.4 commands
The data suggests that most dbt™ jobs, regardless of project size, tend to use relatively simple command structures. Even in medium projects, which can contain as many as 200 models, the average number of commands is hardly greater than one.

3.5 Terminal commands

Further underscoring the heavy reliance on the most common terminal commands, an analysis of actions on Paradime shows that more than two-thirds of command executions are for dbt run. That makes dbt run roughly seven times as common as the next most frequently used terminal command, dbt compile, which accounts for 9.8% of executions.
Less common and newer commands, like dbt list, dbt clone, and dbt retry, each account for less than one-tenth of a percent of executions.
Terminal command | % share of executions
dbt run | 68.196%
dbt compile | 9.771%
dbt build | 8.231%
dbt test | 5.743%
dbt deps | 1.746%
dbt run-operation | 1.620%
dbt seed | 1.472%
dbt snapshot | 0.977%
dbt debug | 0.846%
dbt ls | 0.632%
dbt source | 0.219%
dbt clean | 0.207%
dbt docs | 0.170%
dbt list | 0.066%
dbt parse | 0.030%
dbt init | 0.027%
dbt clone | 0.026%
dbt retry | 0.010%
dbt show | 0.006%
Other | 0.005%
There is a strong “Power Law” distribution in favor of dbt run, in particular, but also dbt compile, dbt build, and dbt test. Together, these four terminal commands account for more than 90% of executed commands.
On the other side of the Power Law distribution, the remaining 15+ commands together account for only around 8% of actions. For example, dbt run-operation, which invokes macros, accounts for just 1.6% of terminal commands. It’s hard to avoid the conclusion that analytics engineers could be exploring less common features more aggressively.

3.6 dbt™ bloopers

We refer to command line and other mistakes in dbt™ as “bloopers,” and some are head scratchers. They do offer an interlude of amusement amid a lot of dry data and charts.
Some are comprehensible, for example, dbt runoperation (missing the dash in "run-operation") and dbt seeds (the command is singular: dbt seed). Others less so: dbt json does not closely resemble a command in dbt™ (or any other technology) we can think of. What’s surprising is the frequency with which some users are typing these commands. While dbt runoperation generates an error, it has been tried more than 6,000 times. The mysterious dbt json? Nearly 2,000 times.
From head-scratchers to crossing wires with Git, we hope you enjoy the comic relief!
Unrecognized command!?
The most common dbt™ bloopers on Paradime. Users tried to run dbt runoperation over 6,000 times.

IV. Quantifying Success in Analytics Engineering

As analytics engineering matures, the need to measure and communicate its impact has become paramount. Organizations are developing sophisticated metrics and methods to quantify the success of their data initiatives, even as these adapt to the growing prominence of downstream AI/ML applications. While specific metrics, KPIs, and objectives will vary from organization to organization, there are a few common themes we can tease out from the evidence.

4.1 Traditional measurements are outcome- or output-based

Analytics and data engineering teams have historically been measured on specific downstream outputs and outcomes rather than on efficiency, productivity, or business impact.
A typical list of outcome- and output-oriented metrics might include:
  • Stakeholders’ adoption of downstream analytics and visualization products and their satisfaction.
  • Metrics focusing on data availability and quality. Data quality might be tracked via KPIs around data freshness, accuracy, and completeness.
  • In highly regulated industries, compliance-tied metrics might also be important.
  • These outcome- or output-based metrics might then be complemented with scalability measures. For example, KPIs designed to show that metrics around quality are not deteriorating as data volumes or the number of data models increase.
To give a specific example, data quality KPIs might be derived from simple evaluations like the aforementioned non-null tests, which determine whether the proportion of non-null (vs. null) values in target data table columns falls within an acceptable range.

4.2 More focus on efficiency and productivity

Increasingly, however, analytics engineers are focusing on an additional dimension: efficiency in the form of productivity gains and cost savings. As already mentioned, there’s top-down pressure in this direction as companies seek to become more cost-effective. But there’s also a bottom-up influence thanks to the broader availability of AI-powered productivity tools and ROI-linked metrics. Platforms like Paradime are democratizing AI-powered automations and popularizing measurements of time and cost savings.
At the same time, the adoption of AI-powered tools in analytics engineering allows for a clear point of comparison: before and after AI-assisted automation.

4.3 Demand for comprehensive, ROI-based measurements

Analytics leaders are increasingly focused on tying their work directly to ROI. Soon the day will come, and for some it already has, when data leaders will have to answer for the data function’s cost to the company.
This trend toward multi-dimensional ROI in data work requires a delicate balance between cost control and value creation. While it remains difficult to draw a straight line from a specific data asset or pipeline to revenue impacts, the preoccupation with ROI has certainly nudged data leaders in that direction.
Starting with an eye toward cost-savings, data practitioners are assessing their roadmaps not just in terms of what they’ll build but also through the lens of financial impacts.

V. Conclusion: Paths to Efficiency, Productivity, and Data Quality

All companies want to save time and money and see a greater return on investment from their data assets. We’ve reviewed dozens of tactics that analytics engineers already have at hand to make that happen. These include AI-powered tools, advanced materialization, and smarter use of dbt™ commands.
But data quality also drives ROI. Few would argue with the statement that consistent and accurate data improves business agility and decision-making. The data needs of downstream AI/ML-based applications have further raised the stakes on data quality.
“I see a lot of companies are pushing on AI, and they realize they don’t have the data quality to do AI,” says Alessio Roncallo, director of BI and analytics at Zego. “AI works best with clean data. In the next few years, companies will place a lot of emphasis on cleaning data for AI.”
In data modeling, time pressure and business demands often push analytics engineers to cut corners and skip simple actions known to improve data quality, such as writing documentation and creating metadata.
“I value data quality very highly. It’s absolutely [crucial] for a successful data function,” says Alexandre Carvalho at Stenn. “The problem is that the incentives are not aligned. The people who need to create the metadata and document the data are not the ones that are actually going to benefit from the documentation.”
It’s a powerful insight. The same incentive misalignment applies to testing. There’s no question that expanded test coverage results in improved data quality and reliability. But downstream stakeholders are not the ones tasked with writing countless unit tests or designing comprehensive testing schemes for ballooning projects.
AI-powered analytics engineering will untangle this incentives problem because it allows for documentation creation and test generation with a click, even across sprawling datasets. That ease removes all the friction from the equation, clearing away the barriers that lead to sparse and unhelpful documentation or paltry test coverage. With AI-powered tools, it will become trivial to add the right tests to the next data model or column and generate documentation for a new data table.
We’ve seen how commit message generation and AI-powered test generation have already saved analytics engineering teams hundreds of hours and thousands of dollars. These tools will only become more powerful in the future. Measurement will also improve, including the ability to design metrics and optimize across dimensions like data quality and consistency.
At Paradime, we’re focused on delivering a vision of comprehensive ROI measurement for all your data assets and data work. What we’ve seen in 2024 is only the beginning.

VI. Appendix - Methodology

This year’s Paradime Unwrapped report is based on tens of millions of anonymized and aggregated data points from user actions on the Paradime platform. For charts and tables, the time range under consideration covers the trailing 24 months, with individual analyses encompassing shorter intervals based on data availability. For example, DinoAI, Paradime’s copilot, went into wide public release in June 2024, so data points related to leveraging AI stretch back less than a year.
Power users of AI features are defined as those who have used specific AI-powered productivity features at least ten times, while standard users are those who have used features at least once but fewer than 10 times. Several analyses are based on assessing how certain behaviors vary with project size, with projects bucketed in size bands according to the number of models they contain. We also performed similar analyses grouping projects according to the number of maintainers.
Time savings are based on calculating the average amount of time it takes to complete common tasks, while cost savings derive a dollar value from these time savings based on average analytics engineering salaries across major global markets.
Interviews with data and analytics engineering leaders took place in September 2024.

Quickstart Your Paradime Journey Using dbt™

Onboard in seconds, no credit card required, and start saving today!

Start for free