As a data engineer, you will work on a small international team providing high-volume data collection and analysis technologies that support cyber threat intelligence for identifying and exposing information operations. You’ll help design, develop, and maintain our backend to collect research data, process it, and make it actionable. You will grow your skills, work with cutting-edge research and technologies, and introduce new tools (R&D) to the team. You’ll work in an environment that encourages creative thinking and novel solutions to interesting problems. You will work closely with data engineers, data scientists, security researchers, and intelligence analysts to build systems that enable the collection and analysis of data for tailored reporting. We constantly adapt to a changing target landscape to maintain access to information.
Bottom line: you’ll create scalable data pipelines that make our team smarter, faster, and better at what we do: protecting the world from evil.
You will work on the Information Operations (IO) intelligence analysis team, while partnering with a small multidisciplinary group to design and implement some of our most critical research and collection projects. You will provide innovative, pragmatic solutions to technical problems grounded in data collection and processing. You will be self-driven to learn about the security challenges we seek to address, and you will work with members of different teams to design solutions that collect and process data to identify and expose information operations.
- Collaborate with the IO team to architect, develop, and manage solutions for the collection and analysis of high volumes of data
- Help design and build our collection pipelines to enable the development of solutions that support service deliverables and help our analysts work more efficiently
- Increase actionability of threat intelligence reporting by helping us develop infrastructure to analyze and process large amounts of data
- Engage in architecture sessions, challenge existing solutions, and inspire ideas for future enhancements
- Build new data collection and analysis systems
- Maintain and improve code base of existing projects
- Write requirements and implementation documentation
- Manage multiple tasks concurrently
- You have the ability and drive to help design data systems for large-scale data query and aggregation based on analysis requirements
- You are self-driven to learn about different resources and how to use them
- You can help shape technical decisions in collaboration with the team
- You can collaborate with other team members to ensure successful product creation
- You pick up new tools and technologies quickly
- You work well as a member of a small global team in a fast-paced environment
- You can communicate complex technical ideas to other team members
- Experience or interest in architecting data systems for large-scale data query and aggregation
- Experience with multiple programming languages and willingness to learn new ones as requirements dictate; experience with Python strongly preferred
- Experience with relational databases, such as PostgreSQL or MySQL
- Experience with search indices (e.g. Elasticsearch) and working with big data
- Cloud development experience (AWS, GCP or Azure)
- Experience building data pipelines (e.g. Airflow, NiFi)
- Distributed processing of large datasets (e.g. Spark, Presto, Athena, BigQuery)
- Streaming frameworks (e.g. Kafka, Kinesis)
- Graph databases (e.g. Neo4j, JanusGraph)
- NoSQL databases (e.g. MongoDB, DynamoDB, Cassandra)