Professional Experience
June 6, 2024
I am a Software/Data Engineer with 5+ years of experience and a passion for developing and applying cutting-edge technologies for data processing and distributed computing. I enjoy contributing to open-source projects and learning new things about software and the technology industry.
Senior Data Engineer | Capital One
November 2021 - Present
At Capital One I work in the Software line of business, where I have been a member of multiple teams building a SaaS application called Slingshot. Slingshot was one of Capital One’s first commercialized software products, and its goal is to help customers better understand and optimize their Snowflake spend. I helped build and support features for cost-saving recommendations, which give customers an in-app experience for viewing and applying optimizations to improve the cost and performance of their Snowflake resources.
- Optimized an ETL process for producing aggregated metrics, reducing execution time by 99% on 500-million-record batches and cutting daily running cost by 50%.
- Enhanced the developer experience across multiple code bases by introducing SQL and JavaScript linting and formatting.
- Designed and implemented a DAG scheduler in Snowflake using SQL and JavaScript, supporting dozens of tenants and generating more than 200 cost recommendations monthly (the scheduling idea is sketched after this section).
- Produced a QuickSight dashboarding experience for customers, resulting in 3 high-value sales conversions.
- Introduced CI/CD for Snowflake databases across 4 teams, removing manual deployment effort and parallelizing testing stages to reduce pipeline execution time by 66%.
- Created and distributed an internal Python library for generating datasets in code, which powered test suites and led to an 82% reduction in manual QA testing (a minimal sketch of the idea also follows this section).
- Mentored developers in SQL, Python, and API development through pair programming sessions and code reviews.
Tools and Tech: Java, SQL, Python, JavaScript, Rust, gRPC, REST, AWS, Snowflake, GitHub
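The DAG scheduler above was implemented in SQL and JavaScript inside Snowflake, so the Python sketch below is only an illustration of the core scheduling idea: ordering dependent tasks with a topological sort before running them. The task names and graph are hypothetical.

```python
# Illustrative only: resolve task dependencies and produce a run order
# using Kahn's algorithm. The production scheduler lived in Snowflake.
from collections import deque

def topological_order(tasks: dict[str, list[str]]) -> list[str]:
    """Return a run order for a DAG given {task: [upstream tasks]}."""
    indegree = {task: len(deps) for task, deps in tasks.items()}
    downstream: dict[str, list[str]] = {task: [] for task in tasks}
    for task, deps in tasks.items():
        for dep in deps:
            downstream[dep].append(task)

    ready = deque(task for task, count in indegree.items() if count == 0)
    order: list[str] = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for child in downstream[task]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)

    if len(order) != len(tasks):
        raise ValueError("cycle detected in task graph")
    return order

# Hypothetical tenant DAG: aggregate usage before generating recommendations.
dag = {
    "ingest_usage": [],
    "aggregate_metrics": ["ingest_usage"],
    "generate_recommendations": ["aggregate_metrics"],
}
print(topological_order(dag))
# ['ingest_usage', 'aggregate_metrics', 'generate_recommendations']
```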
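The dataset-generation library is likewise internal, so this sketch only shows the idea it was built around: declaring small, deterministic datasets in code that test suites can consume directly. The DatasetBuilder name and API are hypothetical, not the library's actual interface.

```python
# Hypothetical builder for declaring test datasets in code.
import csv
import io

class DatasetBuilder:
    def __init__(self, columns: list[str]):
        self.columns = columns
        self.rows: list[dict] = []

    def with_row(self, **values) -> "DatasetBuilder":
        # Fail fast if a test forgets a column, rather than at assertion time.
        missing = set(self.columns) - values.keys()
        if missing:
            raise ValueError(f"missing columns: {missing}")
        self.rows.append(values)
        return self  # chainable, so a dataset reads as one declaration

    def to_csv(self) -> str:
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=self.columns)
        writer.writeheader()
        writer.writerows(self.rows)
        return buf.getvalue()

# Usage in a test: a small, deterministic input instead of hand-made fixtures.
warehouse_usage = (
    DatasetBuilder(["warehouse", "credits_used"])
    .with_row(warehouse="ETL_WH", credits_used=12.5)
    .with_row(warehouse="BI_WH", credits_used=3.0)
)
print(warehouse_usage.to_csv())
```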
Data Engineer | Hexagon PPM
August 2019 - November 2021
- Implemented ETL pipelines using Spark and Delta Lake to ingest millions of telemetry messages per hour, providing up-to-date reporting for internal users and 4,500 customers (a minimal ingestion sketch follows this section).
- Developed internal Python packages, reducing code duplication and introducing automated testing.
- Performed data lake migrations across Azure regions, reducing monthly expenses by 14%.
Tools and Tech: SQL, Python, Apache Spark, Delta Lake, Azure, Databricks
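A minimal sketch of that ingestion pattern, assuming telemetry lands as JSON files in cloud storage; the paths, schema, and storage-account placeholder are illustrative rather than the production setup.

```python
# Stream JSON telemetry files from cloud storage into a Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("telemetry-ingest").getOrCreate()

# Incrementally pick up new telemetry files as they land.
raw = (
    spark.readStream.format("json")
    .schema("device_id STRING, event_ts TIMESTAMP, metric DOUBLE")
    .load("abfss://telemetry@<storage-account>.dfs.core.windows.net/raw/")
)

# Partition by event date so downstream reporting queries prune efficiently.
cleaned = raw.withColumn("event_date", F.to_date("event_ts"))

# Checkpointing gives fault-tolerant, incremental processing into Delta.
query = (
    cleaned.writeStream.format("delta")
    .option("checkpointLocation", "/checkpoints/telemetry")
    .partitionBy("event_date")
    .trigger(availableNow=True)
    .start("/delta/telemetry")
)
query.awaitTermination()
```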
Education
University of Alabama in Huntsville
Master of Science in Computer Science - December 2021
Bachelor of Science in Computer Science - May 2020
Certifications
AWS Certified Solutions Architect Associate - June 2022
Snowflake SnowPro Core - February 2023