DevOps Engineer, Manager, Finance Analytics
If you’re looking for a meaningful career, you’ll find it here at Webster. Since our founding in 1935, our focus has always been to put people first, doing whatever we can to help individuals, families, businesses, and our colleagues achieve their financial goals. As a leading commercial bank, we remain passionate about serving our clients and supporting our communities. Webster’s values of Integrity, Collaboration, Accountability, Agility, Respect, and Excellence set us apart as a bank and as an employer.
Come join our team where you can expand your career potential, benefit from our robust development opportunities, and enjoy meaningful work!
Primary responsibilities:
- Develop expert knowledge and experience with Webster’s data systems and tools.
- Design, deploy, and maintain serverless infrastructure and model pipelines.
- Execute and support the CECL Quarterly Production Process and Annual Refresh.
- Build, automate, and monitor statistical and machine learning model workflows from development to production.
- Analyze and organize systems and datasets to derive actionable insights and create efficient, low-maintenance pipelines.
- Develop data workflows to support data ingestion, wrangling, transformation, reporting, and dashboarding.
- Build and manage CI/CD pipelines to ensure reliable, secure, and repeatable deployments.
- Collaborate across teams to analyze requirements and propose infrastructure or pipeline solutions.
- Use Snowflake for data access and processing, including creating robust data pipelines and integrations.
- Manage data science notebooks in production environments (e.g., SageMaker Studio, JupyterHub).
- Use Git for version control and workflow management across codebases and projects.
- Collaborate with cross-functional teams to understand data requirements and implement effective solutions.
Key Skills/Experience:
- 5+ years of experience in data engineering and/or DevOps, specializing in AI and machine learning deployment.
- Experience working with complex data structures within an RDBMS (Oracle, SQL).
- Experience with core programming languages and data science packages (Python, Keras, TensorFlow, PyTorch, Pandas, scikit-learn, Jupyter, etc.).
- Proficiency in the Python and/or SAS programming languages.
- Experience with traditional ML and deep learning techniques (CNNs, RNNs, LSTMs, GANs), model tuning, and validation of developed algorithms.
- Familiarity with commercial & consumer banking products, operations, and processes, or risk & finance background/experience.
- 5+ years of experience leveraging cloud services and capabilities of computing platforms (e.g., AWS SageMaker, S3, EC2, Redshift, Athena, Glue, Lambda, etc. or Azure/GCP equivalent).
- Experience with reporting and dashboarding tools (e.g., Tableau, Qlik Sense).
- Extensive experience with design, coding, and testing patterns as well as engineering software platforms and large-scale data infrastructures.
- Experience with DevOps and CI/CD services (Airflow, GitLab, Terraform, Jenkins, etc.).
- Experience with data science project implementation.
- Experience documenting processes, scripts, and memos clearly for internal knowledge sharing and audits.
- Strong analytical and problem-solving skills and ability to work in a collaborative team environment.
- Excellent communication skills to convey complex technical concepts to non-technical stakeholders.
- Ingenious, analytical, resourceful, persistent, pragmatic, motivated, and socially intelligent.
- Strong time management skills to prioritize multiple tasks.
Desired Attributes:
- Familiarity with Docker and Kubernetes for containerized deployments.
- Experience with Terraform, AWS CDK, or other infrastructure-as-code tools.
- Knowledge of ETL/ELT pipelines and orchestration tools.
- Understanding of monitoring/logging best practices in cloud-based environments.
- Familiarity with the SAS programming language.
- Experience using Confluence for documentation and collaboration.
- Knowledge of Tidal or other workflow automation and scheduling tools.
- Experience in developing constructive relationships with a wide range of different stakeholders.
- Experience in developing Machine Learning and Deep Learning models.
- Ability to independently gather data from various sources and conduct research.
- Ability to think “out of the box” and suggest ways to improve processes.
Education:
- Bachelor’s, Master’s, or Ph.D. degree in computer science, data science, or another STEM field (e.g., physics, math, engineering). Other degrees with a strong computer science and/or data science background are also acceptable.
The estimated base salary range for this position is $110,000 USD to $125,000 USD. Actual salary may vary up or down depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position is eligible for incentive compensation.
#LI-JS1
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.