Job Description
ABOUT ROCKET MONEY 🔮
Rocket Money’s mission is to meaningfully improve the financial prosperity of millions of people. Rocket Money offers members a unique understanding of their finances and a suite of valuable services that save them time and money – ultimately giving them a leg up on their financial journey.
ABOUT THE TEAM 🤝
Data Platform Engineers at Rocket Money further our mission by building and maintaining the infrastructure that enables our company to understand our users and products through reliable, scalable data systems. We build the foundational platform that ingests, processes, and serves data—enabling our Data Analytics, Machine Learning, and Software Engineering teams to build data products efficiently. We work closely with engineering teams to ensure data is captured correctly at the source, design resilient pipelines, and create self-service capabilities that empower stakeholders.
The Data Platform team is in an exciting phase of growth and formalization. While we have established tools and processes in place (dbt, BigQuery, Terraform), we're actively building the comprehensive standards, practices, and systems that will scale with Rocket Money as it grows. We're looking for engineers who can take ownership and evolve workflows into reliable, well-documented, production-grade systems. This is an opportunity to shape architectural decisions, establish team practices, and define how Rocket Money works with data for years to come.
We have a strong preference for process-oriented systems thinkers who excel in balancing stakeholder requirements, technical debt, and organizational complexity—and who thrive in environments where they need to create structure from ambiguity. You'll need to be comfortable making principled decisions, taking ownership not just of code but of outcomes, and operating with significant autonomy as we build the platform together.
ABOUT THE ROLE 🤹
In this role, you will:
- Be an end-to-end owner of our data platform infrastructure, ensuring its security, usability, and performance. Work closely with analytics engineers, machine learning engineers, and software engineers to ensure the platform meets their needs.
- Make collaborative decisions about data tooling, pipeline design, and governance, and implement opinionated interfaces that facilitate easy and best-practice-aligned development for other teammates.
- Continuously reduce failure rates for data sources by shifting alerting “to the left”; i.e. catching and quarantining bugs and failures as close to the source as possible.
- Analyze patterns in how source data is generated, modeled, and consumed. Work with stakeholders to implement the best solution when new data sources are added.
- Take ownership of existing tools and workflows that may be functional but lack formal structure, documentation, or reliability measures. You'll assess what's working, identify gaps, and systematically improve systems to production-grade quality.
- Document everything you build knowing that you're creating the foundation others will build upon. Your runbooks, architecture decision records, and system documentation will become the institutional knowledge of our data platform.
- Proactively communicate with multiple stakeholders on platform capabilities, technical constraints, architectural decisions, project priorities, and platform support.
- Confidently juggle multiple projects and priorities in our fast-paced environment and work with stakeholders and platform teammates to ensure infrastructure changes, migrations, and improvements are delivered on schedule.
- Automate aggressively and deliberately, using anything from GitHub Actions to Slack Workflows to minimize repetitive tasks. Effectively judge when the level of effort is appropriate and avoid over-engineering.
ABOUT YOU 🦄
- You have 6+ years of experience working with data infrastructure, data engineering, or platform engineering within a fast-paced environment. You are highly proficient with SQL, Python, and cloud-based Infrastructure-as-Code (e.g. Terraform), and comfortable working with bash/shell scripting.
- You have 4+ years of production experience with modern data stacks including data warehouses (BigQuery, Snowflake, or Redshift), orchestration tools, managed ingestion services, and infrastructure as code (Terraform, Pulumi, or CloudFormation).
- You have 2+ years of experience building and maintaining production data pipelines, whether through ELT tools, custom applications, streaming systems, or event-driven architectures.
- You've successfully "professionalized" data infrastructure before—taking scrappy, working systems and evolving them into reliable, well-documented, production-grade platforms. You can articulate what "production-ready" means in a data context.
- You have a bias toward action and aren't paralyzed by imperfect solutions. You understand when "good enough for now with a plan to improve" beats "perfect but six months late." You ship incrementally and iterate based on feedback.
- You're comfortable being the first person to tackle a problem. You don't need extensive mentorship or detailed tickets—you can take a high-level business need and figure out the technical approach. That said, you know when to ask for help and can articulate what you need.
- You take ownership seriously—not just of writing code, but of outcomes. When you build something, you implement monitoring, write runbooks, create alerts, and ensure it can be maintained by others. You think about the full lifecycle of systems, not just initial delivery.
- You have strong opinions, weakly held. You can make and defend architectural decisions, but you're open to feedback and willing to change course when presented with better information. You can disagree and commit.
- You understand that "building from scratch" doesn't mean rejecting existing tools—it means thoughtfully selecting, configuring, and integrating managed services and open-source solutions to create a cohesive platform. You know when to build and when to buy.
- You have experience making big changes to critical data infrastructure. You’ve successfully re-architected, migrated, or upgraded data tooling with strict SLAs without significantly affecting downstream stakeholders.
Bonus points if:
- You have led a data infrastructure migration or modernization project where you defined the vision, approach, and implementation.
- You have created internal tools, frameworks, or CLIs that improved how teams work with data (not just one-off scripts).
- You have established data platform best practices like CI/CD workflows, testing frameworks, or observability standards where none existed.
- You have expertise in cloud platforms and technologies analogous to our stack:
  - Our stack: GCP (BigQuery, Datastream, Cloud Functions, Vertex AI, GCS), dbt, Fivetran, Postgres, Python, Terraform, Looker, Retool
  - Analogous experience: AWS (Redshift, DMS, Lambda, SageMaker, S3) or Azure (Synapse, Data Factory, Functions), Snowflake, Airbyte/Stitch, infrastructure-as-code tools, BI platforms
WE OFFER 💫
- Health, Dental & Vision Plans
- Life Insurance
- Long/Short Term Disability
- Competitive Pay
- 401k Matching
- Team Member Stock Purchasing Program (TMSPP)
- Learning & Development Opportunities
- Tuition Reimbursement
- Unlimited PTO
- Daily Lunch, Snacks & Coffee (in-office only)
- Commuter benefits (in-office only)
Additional information: Salary range of $160,000 - $200,000 + bonus + benefits. Base pay offered may vary depending on job-related knowledge, skills, and experience.
Rocket Money, Inc. is an Affirmative Action and Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
