YOUR CHALLENGE
As a Data Engineer, you will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. You will support our software developers, database architects, data analysts and data scientists on data initiatives, and you will ensure that an optimal data delivery architecture is applied consistently across ongoing projects.
TECHNOLOGIES WE USE
If you join KNEIP, you will work on the following technologies:
  • Big Data tools such as Hadoop, Spark, NiFi, Kafka, Hive, Sqoop, Storm, etc.
  • Relational SQL and NoSQL databases, including PostgreSQL and HBase
  • Cloud services such as Azure and AWS
  • Object-oriented and functional languages: Python, Java, Scala, etc.
YOU GET
As a member of the KNEIP team, you will work in a dynamic, multilingual and multicultural environment. We offer an attractive salary package, a range of benefits and a flexible work-time schedule. We attach great importance to the personal fulfilment and professional development of our employees, and we offer opportunities for external training as well as access to learning platforms (e.g. CodeSchool, Pluralsight).
WHO YOU ARE
You have the following characteristics, knowledge and skills to perform the role at a high level:
  • You have at least 3 years of experience in a Data Engineer role
  • You have a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
  • You have experience building and optimizing ‘big data’ pipelines, architectures and data sets within the Hadoop ecosystem
  • You have advanced SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases
  • You have strong analytic skills related to working with unstructured datasets
  • You are able to build processes supporting data transformation, data structures, metadata, dependency, data governance and workload management
  • You have a successful history of manipulating, processing and extracting value from large disconnected datasets
  • You have a working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • You have the ability to self-organise and manage your own workload and priorities
INTERESTED?
If you wish to join our team, please click the "Apply" button below to send your application letter and Curriculum Vitae. All applications will be treated in the strictest confidence. During the recruitment process, candidates may be asked to provide a criminal record extract for background screening purposes.
Apply now