We are Connecterra:
We innovate for purpose. We are building AI that will impact the future of our planet.
By 2050, the world's population will exceed 9 billion, which will require a 60% increase in food production. With an average age of 59 (in the US), farmers need technology to help them scale. With over 500 million farmers in the world and a billion people working in agriculture, now is the time to build the technology that will feed the world.
As a full-stack technology company, we engineer hardware sensors and a machine learning platform that trains and operates an AI service in 9 territories. Ida is a farmer's assistant that can run a dairy farm 30% more efficiently than a human farmer. Ida learns the behavior of farmers and dairy cows and provides guidance on how to run a better farm. Join us and help us build an AI that has purpose. Learn more about us here, or check out our press coverage here and here.
The Connecterra Engineering Team is at the core of our company. We keep all systems running and build powerful housing for our AI to grow and learn. Most importantly, we are a key component in the company’s strategic development.
As our Data Engineer, you will support our software developers and data scientists on data initiatives and will ensure the consistency of the data delivery architecture across ongoing projects.
If you are a fast learner who thrives in challenging environments and has a creative yet pragmatic approach to problem-solving, read on!
- A master's degree in Computer Science or Software Engineering, or proven experience working with large volumes of data
- Can design and implement a scalable real-time data processing pipeline using open source projects
- Can maintain a codebase with multiple contributors and manage the release processes
- Can develop solutions for managing structured data (e.g. SQL Server, PostgreSQL) and unstructured/NoSQL data (e.g. Hadoop, HBase)
- Proficient in Python, Java or Scala
- Proficient in build tools such as Maven or SBT
- Experience with or working knowledge of open source big data solutions such as Spark, Flink, Beam, HBase, Kafka, Cassandra, NiFi, etc.
- Willingness to work across a diverse set of technologies, and ability to ramp up on new technologies quickly
- Excellent written and verbal communication skills
- A basic understanding of functional programming and property-based testing
- A good understanding of relational databases
- Knowledge of Azure, Google Cloud, or AWS
- Experience with build / test automation
We have an eclectic mix of technologies and languages in our stack, but we are working on converging towards a unified stack for easier maintenance. Our tech stack spans Microsoft .NET, Python, Java, Angular, React, and a host of open source platforms. You will have a say in setting the direction of our technology stack; just come prepared with solid arguments.
- Design and develop highly available, scalable, fault-tolerant data pipelines using open source tools
- Support the data science team in running machine learning models in real time
- Contribute to the what (story review), why (roadmap), how (detailed design and architecture), and when (work estimation, deadlines, etc.) of our product plans
- Work well in a team, fostering an environment of collaboration and innovation
- Provide third-line support (identify, troubleshoot, fix, and work around issues) for applications and services: you own your code!
- Assist with environment build deployments, release notes, and build notices
- The opportunity to play a vital role in an exciting start-up
- A competitive package
- Flexible working hours and vacation policy
- Fantastic office space in Amsterdam
- Very open and creative working environment
- A fridge filled with specialty beers and drinks
- Catered lunches, tons of snacks and fruit