Job posting has expired
Data Engineer
United States, Georgia, Atlanta
80 Jesse Hill Junior Drive Southeast
Grady Health System offers many career paths for experienced professionals. Whether you have many years of experience or are in the early stages of your career, you can find a rewarding career at Grady!

The Data Engineer is responsible for designing, developing, and maintaining the infrastructure and systems required for data storage, processing, and analysis. They play a crucial role in building and managing the data pipelines that enable efficient and reliable data integration, transformation, and delivery for all data users across the enterprise. The Data Engineer is also responsible for creating BI solutions designed to gain insights, monitor key organizational and operational measures, and provide visibility into system performance throughout the organization and to our customers.

Responsibilities:

- Designs and develops data pipelines that extract data from various sources, transform it into the desired format, and load it into the appropriate data storage systems.
- Integrates data from different sources, including databases, data warehouses, APIs, and external systems.
- Analyzes, designs, develops, and documents BI solutions based on Information Services standards and best practices.
- Coordinates with the team to build and share knowledge, ensuring consistent delivery of information.
- Analyzes, diagnoses, and resolves reporting, ETL, and data issues.
- Ensures data consistency and integrity during the integration process, performing data validation and cleaning as needed.
- Transforms raw data into a usable format by applying data cleansing, aggregation, filtering, and enrichment techniques.
- Optimizes data pipelines and data processing workflows for performance, scalability, and efficiency.
- Monitors and tunes data systems, identifies and resolves performance bottlenecks, and implements caching and indexing strategies to enhance query performance.
- Implements data quality checks and validations within data pipelines to ensure the accuracy, consistency, and completeness of data.

Required:
Preferred:

- Master's degree in Information Systems, Business Intelligence Analytics, or a similar field.
- Knowledge of Apache technologies such as Kafka, Airflow, and Spark to build scalable and efficient data pipelines.
- Experience in programming languages such as Java, Python, and C/C++.
- Epic Caboodle Developer Certification.

Equal Opportunity Employer-Minorities/Females/Veterans/Individuals with Disabilities/Sexual Orientation/Gender Identity.