Data Architect/Engineer

Remote, USA | Full-time | Posted 2025-05-21

MatchPoint Solutions is a fast-growing, young, energetic global IT-Engineering services company with clients across the US. We provide technology solutions to clients such as Uber, Robinhood, Netflix, Airbnb, Google, Sephora, and more! More recently, we have expanded internationally and now work in Canada, China, Ireland, the UK, Brazil, and India. Through our culture of innovation, we inspire, build, and deliver business results, from idea to outcome. We keep our clients on the cutting edge of the latest technologies and provide solutions using industry-specific best practices and expertise.

We are excited to be continuously expanding our team. If you are interested in this position, please send over your updated resume. We look forward to hearing from you!

Job Description

Title: Data Architect/Engineer
Location: Remote
Duration: 6+ Months
Rate: $100 to $110/hr on W2

Overview: We are seeking a skilled and versatile Data Architect / Data Engineer to design, build, and optimize data platforms and pipelines within a distributed environment. The ideal candidate will possess deep expertise in managing large-scale data systems, data integration, modern data engineering practices, and pipeline orchestration. You will play a key role in architecting and engineering scalable, high-performance data solutions that drive business insights and innovation.

Key Responsibilities:
  • Design, implement, and manage scalable data architectures on distributed platforms (e.g., MapR, HPE Unified Analytics and Data Fabric).
  • Develop, optimize, and maintain robust data pipelines using tools such as Spark, Airflow, and EzPresto (see the orchestration sketch after this list).
  • Configure and maintain Kafka architecture, MapR Streams, and related technologies to support real-time and batch data processing.
  • Implement Change Data Capture (CDC) mechanisms and integrate data using APIs and streaming techniques.
  • Monitor, tune, and troubleshoot distributed data clusters including MapR and Kubernetes environments.
  • Develop and maintain CI/CD pipelines using Jenkins and integrate with GitHub for automated testing and deployment.
  • Collaborate with cross-functional teams to ensure data quality, governance, and compliance standards are met.
  • Leverage tools such as Iceberg and Superset for data storage optimization and visualization.
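For context on the pipeline orchestration responsibility above, here is a minimal sketch of an Airflow DAG that schedules a daily Spark batch job. The DAG id, task id, script path, and spark-submit arguments are illustrative placeholders, not details from this posting.

```python
# Minimal sketch: an Airflow DAG that submits a PySpark job once a day.
# All names and paths below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_pipeline",          # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Hand the transformation work to Spark via spark-submit on the cluster.
    transform_events = BashOperator(
        task_id="transform_events",
        bash_command="spark-submit --master yarn /jobs/transform_events.py",  # placeholder path
    )
```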

Required Skills & Experience:
  • Strong experience with distributed data platforms, including MapR and Kubernetes.
  • Proficient in data pipeline tools and frameworks: Spark, Airflow, EzPresto.
  • Solid programming and scripting skills: Python, Bash.
  • Expertise in Kafka architecture and operations (see the consumer sketch after these lists).
  • Experience with CI/CD development workflows using Jenkins and GitHub.
  • Knowledge and use of Apache Iceberg for data lake management.
  • Familiarity with data architecture best practices, including CDC and API-based integrations.

Preferred / Nice to Have:
  • Experience with HPE Unified Analytics and Data Fabric.
  • Familiarity with MapR Streams and Superset for real-time analytics and dashboarding.
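To make the Kafka requirement above more concrete, below is a minimal sketch of a Python consumer loop using the kafka-python client. The topic name, broker address, and consumer group are hypothetical placeholders; a production setup against MapR Streams or a secured Kafka cluster would need additional configuration.

```python
# Minimal sketch: consuming JSON messages from a Kafka topic with kafka-python.
# Topic, broker, and group id are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="orders-etl",                 # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each record exposes partition/offset metadata and the deserialized payload.
    print(message.partition, message.offset, message.value)
```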

