Description: *W2 Applicants Only - Cannot Support C2C*
A software engineer in this role embraces the challenge of processing petabytes, or even exabytes, of data daily in a high-throughput API/microservice ecosystem. The software engineer understands how to apply technologies to solve big data problems and to develop innovative big data solutions, and generally works on implementing complex projects focused on collecting, parsing, managing, analyzing, and making available large sets of data to turn information into insights across multiple platforms. The software engineer should also be able to develop prototypes and proofs of concept for selected solutions.
Responsibilities:
• Designs, builds, and supports cloud and open-source systems that process geospatial data assets via an API-based platform
• Partners with other internal development communities to bring needed data sets into the asset and make that data available to the Bayer Enterprise and internal development communities
• Builds highly scalable APIs and the associated architecture to support thousands of requests per second
• Provides leadership in advancing Bayer’s understanding of environmental/external influences on field performance and risk factors
• Works at all stages of the software life cycle: proof of concept, MVP, production, and deprecation
• Defines and promotes data warehousing design principles and best practices for architecture and techniques within a fast-paced, complex business environment
Required Qualifications:
• BSc in Computer Science or equivalent job experience.
• Minimum of 2 years' experience with Python, Java, Go, or a similar development language.
• Experience developing HTTP APIs that serve up data in a cloud environment.
• Ability to build and maintain workflows and applications in a modern cloud architecture, e.g. AWS, Google Cloud.
• Proven experience with ETL concepts: data integration, consolidation, enrichment, and aggregation. Ability to design, build, and support stable, scalable data pipelines or ETL processes that cleanse, structure, and integrate big data sets from multiple sources and provision them to integrated systems and Business Intelligence reporting.
• Experience working with PostgreSQL/PostGIS.
• Proven success using Docker to build and deploy within a CI/CD environment, e.g. Argo.
• Experience with code versioning and dependency management systems such as GitHub, SVN, and Maven.
• Proficiency working in command-line environments, e.g. Docker, K8s, AWS CLI, gcloud, Argo, psql, SSH.
Preferred Qualifications:
• MSc in Computer Science or a related field.
• Demonstrated knowledge of open-source geospatial solutions such as GeoServer, GeoTrellis, and GeoMesa.
• Proven experience (2 years) with distributed systems, e.g. Kubernetes, Spark, distributed databases, grid computing.
• Experience with stream processing, e.g. Kafka.
• Proven experience (2 years) with Go.
• Experience developing schema data models in a data warehouse environment.
• Experience working with customers and other developers to deliver full-stack development solutions, e.g. collecting software, data, and timeline requirements in an Agile environment.
• Demonstrated knowledge of agriculture and/or agriculture-oriented businesses.
• Experience implementing complex data projects focused on collecting, parsing, managing, and delivering large sets of data to turn information into insights across multiple platforms.
• Demonstrated experience adapting to new technologies.
• Ability to determine hardware and software design needs and act on those decisions; ability to develop prototypes and proofs of concept for selected solutions.
• Experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open-source) software platforms and large-scale data infrastructures.
• Experience creating cloud computing solutions and web applications that leverage public and private APIs.
Contact: [Click Here to Email Your Resumé]