Senior Data Engineer
Palmetto Clean Technology
Palmetto Finance is a branch of Palmetto focused on democratizing the renewable energy industry through financial products. We provide homeowners with financial products that help them benefit from solar power, home efficiency products, and energy storage systems. We empower solar and storage companies with access to our proprietary platform, financing, customer management system, and milestone submission system. Our #1 focus is a phenomenal experience for our customers and partners, evidenced by our growing financial product adoption. Palmetto Finance is growing rapidly and positioned to continue that growth.
Summary of Role
As a Senior Data Engineer at Palmetto, you will be responsible for building, maintaining, and enhancing our data platform, optimizing existing infrastructure and contributing new components that enable the organization to scale. You will work closely with other data practitioners, engineers, and business stakeholders to develop the data strategy for the company and tools and processes to support it. You will own data quality and usability, and see every day the impact your work creates across the business.
This position also provides the opportunity to flex outside of the traditional data engineer role, contributing directly to analytics and machine learning projects, including product experimentation, predictive analytics, and MLOps.
Responsibilities
- Design, develop, and maintain scalable and reliable data pipelines and ETL/ELT processes using tools such as Google Cloud, Looker, dbt, Fivetran, Stitch, Snowflake, and Snowplow.
- Contribute to the data model, enabling analysts, scientists, and business users to reliably and efficiently access data at scale.
- Implement quality control measures and proactively identify and address any data-related issues, ensuring data integrity and enhancing data trust across the organization.
- Continuously improve our performance, stack, and organization by staying updated with the latest industry tools and best practices.
- Understand evolving requirements of the business and expand the capabilities of the data platform accordingly, including building machine learning models and integrating pipelines into our data platform and downstream services.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (or relevant experience)
- 6+ years of experience in data engineering and/or software engineering
- Bonus: 2+ years of experience in data science, data analytics, or machine learning
- Deep, demonstrable understanding of SQL, NoSQL, and analytical data warehouses (Snowflake preferred)
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Familiarity with modern data tools and technologies, including Airflow, dbt, Fivetran, Stitch, and Looker.
- Experience using Python, Java, or Scala for API access and data processing. Bonus points for experience with distributed data processing frameworks (e.g., Spark, Storm, Flink).
- Experience working with public clouds (GCP preferred) and Infrastructure as Code technologies (e.g., Terraform, Pulumi, CloudFormation).
- Experience building machine learning pipelines and integrating them into data platforms.
- Strong written & spoken communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
- Strong opinions weakly held. You are knowledgeable in your domain and able to move quickly based on that knowledge, but humble enough to change your mind.
- Insatiable curiosity and love of learning
- Excellent problem-solving skills
- Flexible and excited to work in a fast-paced, changing environment
If you are an experienced data professional with a passion for sustainability and the skills to contribute to our mission, we would love to hear from you. Apply now to become a part of our growing team!