Most popular jobs

1,310 Jobs Found

Data Engineer

Benefits Data Trust

Philadelphia, PA
12 days ago

Data Engineer

Benefits Data Trust (BDT) seeks a capable Data Engineer to help develop our new Google Cloud data platform and run our current data infrastructure. BDT embraces Data4Good and is a fast-growing, digitally enabled non-profit that improves benefits access for thousands of underserved people.

As part of our Data Engineering team in our Data Science department, the Data Engineer will:

  • Build out our new GCP data platform and collaborate on architectural patterns for it with the Data Engineering team
  • Support the development of machine learning models by building productionization, monitoring, and alerting tools
  • Write, update, and maintain ETL jobs across our data pipelines, mostly in Airflow (a brief sketch of such a job follows this list)
  • Implement continuous improvements using our existing tools/technologies, which include SQL, Airflow, Python, Docker/Kubernetes, and others such as Terraform and Apache Beam; research and select other tools when the situation demands
  • Collaborate with internal customers to identify ongoing platform improvements (teams including Analytics, Projects, Policy, Software Engineering, and others throughout the organization)
  • Consult with software engineers on data-related changes to BDT’s suite of software applications, including schema/model design, table structure, and data collection
  • Engage with colleagues and collaborators using curiosity, critical thinking, a drive to completion, empathy, and a focus on impact
  • Follow existing data access and performance design standards for the data platform, software engineering, and all products and services accessing BDT information
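
A minimal sketch of the kind of Airflow ETL job named above. The DAG id, schedule, and task bodies are illustrative assumptions, not BDT's actual pipelines:

```python
# Hypothetical two-step ETL DAG; task logic is a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull rows from a source system (placeholder)."""

def load():
    """Write transformed rows to the warehouse (placeholder)."""

with DAG(
    dag_id="example_etl",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract, then load
```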

Based in our Center City Philadelphia office, the Data Engineer reports to the Director of Data Science.

Due to COVID-19, BDT is operating under a remote-working protocol, following governmental requirements and CDC guidelines.

Successful candidates will demonstrate the following through their experience (typically 3-5 years as a Data Engineer):

  • Communication and Relationship-building – with technical peers and some stakeholders
  • Cloud-based Solution Implementation – data platforms and infrastructure, including event-driven architectures, microservices and pattern design, supporting compliance and regulated environments (including PII and PHI)
  • Workflow and pipeline development to ensure reliability, availability, and consistency
  • Systems Engineering – on-system service management, typically in *nix environments
  • Data Modeling and Warehousing – proficient understanding of relational data structures and schemas; some familiarity with semi-structured, unstructured (big data) schemas
  • Automation, monitoring, and alerting – creating these tools based on existing designs and frameworks; resolving bugs and issues
  • Cloud engineering – working towards certification on any of the major hyperscale cloud platforms
  • Data Encapsulation & Transfer methodologies – understands standards for file formats and transfer methods
  • Also interested in relevant experience, including:
      o Experience with BI implementations/uplifts (we currently use Looker) and/or Data Governance models and methods
      o Machine learning techniques, productionizing machine learning models, and/or creating models

About BDT

Benefits Data Trust (BDT) is a national nonprofit that helps people live healthier, more independent lives by creating smarter ways to access essential benefits and services. Each year, BDT helps tens of thousands of people receive critical supports using data, technology, targeted outreach, and policy change. Since its inception in 2005, BDT has screened more than one million households and secured over $7 billion in benefits and services. BDT employs more than 200 people and provides enrollment assistance to individuals in six states, and policy assistance to states nationwide. For more information, visit bdtrust.org.


Contract REMOTE Senior Data Engineer / SQL SSIS / MAS500 / SAP

Fort Washington, PA
30 days ago

The engineer will work with SAP, MAS500, and OMS to handle data extraction, manipulation, validation, and EDI communication with the platform.

Corp-to-corp (C2C) arrangements are possible.
The client is accepting green card holders and U.S. citizens.

Position: Senior Data Engineer / SQL SSIS

Location: Remote

Tasks Description:

  • Create and maintain SQL scripts and packages using SSIS
  • Generate CSV and TXT files (a brief export sketch follows this list)
  • Design, develop, and implement database schemas, tables, and views
  • Design, develop, and support processing to transform and load data into the data warehouse
  • Create complex scripts to align with defined business requirements
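
A minimal sketch, in Python via pyodbc, of the kind of file-export task listed above. The posting's actual tooling is SSIS; the connection string, query, and output path here are illustrative assumptions:

```python
# Hypothetical export: run a T-SQL query against SQL Server and write
# the result set to a CSV file.
import csv

import pyodbc

# Illustrative connection string; real credentials/servers will differ.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Warehouse;Trusted_Connection=yes;"
)

def export_to_csv(sql: str, out_path: str) -> None:
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute(sql)
        headers = [col[0] for col in cursor.description]  # column names
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(headers)
            writer.writerows(cursor.fetchall())

# Table and columns are hypothetical.
export_to_csv("SELECT OrderID, Total FROM dbo.Orders", "orders.csv")
```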

 

Qualifications & Requirements:

  • SSIS expertise is a must (7+ years)
  • Deep SQL Server background (must have 2008 R2 through 2017)
  • Proficiency in the development of complex technical solutions and scripts using SSIS/SQL Server
  • Strong T-SQL development skills
  • Experience troubleshooting failures: checking error logging, checkpoints, auditing, and deployment packages
  • Knowledge of data marts / database structures
  • Experience with EDI and Seeburger is a plus
  • Experience with data extraction from SAP systems is a plus


Big Data Engineer

Diverse Lynx

Plymouth-Meeting, PA
12 days ago
Position: Big Data Engineer
Client: Hexaware
Location: Plymouth-Meeting, PA
Full-Time/Contract
Job Description:
Scala: map, flatMap, foldLeft, foldRight, Scala Options, case classes, scopt, Scala unit test cases, higher-order functions, generics, traits
Spark: DataFrames, Datasets and their APIs (withColumn, windowing functions, case/when, etc.), analyzing Spark DAGs and tuning performance issues, join optimizations, good knowledge of Spark internals, Spark Launcher (a brief PySpark sketch follows this list)
Python: basic scripting with concepts like list comprehensions, flow-control statements, functions, classes, file I/O, exception handling, etc.
Akka: good understanding of Akka infrastructure, scheduled messages, the actor lifecycle, supervisor strategies for fault tolerance, routers, mailboxes, etc.
Experience working with different file formats, such as Parquet and ORC
Proficiency in SQL is a must
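
A minimal PySpark sketch of the Spark DataFrame operations named above (withColumn, a windowing function, and case/when). The posting itself emphasizes the Scala API; the data and column names here are illustrative assumptions:

```python
# Hypothetical DataFrame: running total per group plus a case/when column.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("sketch").getOrCreate()

df = spark.createDataFrame(
    [("a", 1, 10.0), ("a", 2, 20.0), ("b", 1, 5.0)],
    ["group_id", "seq", "amount"],
)

w = Window.partitionBy("group_id").orderBy("seq")

result = (
    df.withColumn("running_total", F.sum("amount").over(w))  # windowing function
      .withColumn(                                           # case/when, as in SQL
          "size", F.when(F.col("amount") > 15, "large").otherwise("small")
      )
)
result.show()
```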
Not necessary, but good to have:
  1. Spring Boot
  2. DevOps and the deployment life cycle

Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.

Senior Data Engineer

Splunk

Philadelphia, PA
2 days ago
Ready to shake things up? Join us as we pursue our disruptive new vision to make machine data accessible, usable and valuable to everyone. We are a company filled with people who are passionate about our product and strive to deliver the best experience for our customers. At Splunk, we’re committed to our work, customers, having fun, and most significantly to each other’s success. We continue to be on a tear while enjoying incredible growth year over year.
As the Senior Data Engineer within the Data Technologies and Engineering organization, you will own and be part of business analytics projects from inception through hypercare. As a data and reporting engineer you will drive transformational data and analytics initiatives that improve the effectiveness and efficiency of enterprise business functions. Having the right blend of technological depth; Sales, Marketing, and Finance process and systems expertise; and analytics skills is key to success in the role. You will be expected to manage end-to-end delivery of small- to medium-scale projects in the data and analytics area through active leadership and partner management, at both strategic and tactical levels. The role requires very strong presentation skills and executive presence, with effective communication to create impact and influence at both executive and mid-level management.
What you'll do: Yeah, I want to and can do that.
+ Deep understanding of Cloud Data Warehouse methodologies, Data Architecture, Data Modeling, and metadata.
+ Ability to support the creation of single source of truth for Business Metrics leveraging agreed upon data definitions, create a common data foundation, and semantic layers to be consumed by business.
+ Deliver highly available, reliable, innovative large-scale data warehousing solutions to facilitate data ingestion, build optimized aggregates, and build reporting solutions.
+ Deep knowledge and hands-on experience working with Cloud database technologies like Redshift, Snowflake and multiple BI platforms such as Tableau, Splunk and scripting language like Python.
+ Solid grasp of operational processes, systems, and data in multiple areas of Sales & Operations: forecasting, quote-to-cash, pipeline management.
+ Understanding of SaaS business applications and processes as in Salesforce, SAP, Eloqua, Anaplan, etc.
+ Identify data patterns, attribute hierarchies, and data relationships; organize them into data dictionaries; and create standardized definitions for metrics, including critical metrics.
+ Analytical and problem-solving experience, exposure to large-scale systems and some experience writing code.
+ Strong business analysis skills - capturing and documenting requirements, understanding business impacts and tradeoffs, conducting interviews and workshops, proposing solutions, documentation and training, etc.
Requirements: I’ve already done that or have that!
+ 8+ years of experience as a Data Warehouse Analyst, Data Engineer and/or Data Scientist
+ Savviness with complex SQL queries and knowledge of database technologies, including window and analytical functions (a brief sketch follows this list)
+ Experience with Python analytic libraries and Business Intelligence tools such as Tableau.
+ An ability to provide technical guidance, direction and problem solving to data engineering team members.
+ Confidence to offer consultation to business partners and team members within Sales, Partner, Marketing Operations.
+ Familiarity working with an Agile/Scrum process.
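
A minimal sketch of the window/analytical SQL functions named in the requirements above, run here through Python's built-in sqlite3 module (which supports window functions as of SQLite 3.25). The table and column names are illustrative assumptions:

```python
# Hypothetical bookings table: per-region running totals and a global rank.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (region TEXT, quarter TEXT, amount REAL);
    INSERT INTO bookings VALUES
        ('east', 'Q1', 100), ('east', 'Q2', 150), ('west', 'Q1', 80);
""")

rows = conn.execute("""
    SELECT region, quarter, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY quarter)
               AS running_total,
           RANK() OVER (ORDER BY amount DESC) AS amount_rank
    FROM bookings
""").fetchall()

for row in rows:
    print(row)
```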
Preferred knowledge and experience: These are a huge plus.
+ Knowledge of Splunk products
+ Agile certifications
Education: Got it!
Bachelor’s degree preferably in Computer Science, Information Technology, Management Information Systems, or equivalent years of industry experience.
What We Offer You: Wow, I want that.
+ A constant stream of new things for you to learn. We're always expanding into new areas, bringing in open source projects and contributing back, and exploring new technologies.
+ A set of extraordinarily hardworking, innovative, open, fun and dedicated peers, all the way from engineering and QA to product management and customer support.
+ Growth and mentorship. We believe in growing engineers through ownership and leadership opportunities. We also believe mentors help both sides of the equation.
+ A stable, collaborative and supportive work environment.
We don't expect people to work 12 hour days. We want you to have a successful time outside of work too. Want to work from home sometimes? No problem. We trust our colleagues to be responsible with their time and dedication and believe that balance helps cultivate an extraordinary environment
This isn’t a job – it’s a life changer – are you ready?
Splunk has been named one of San Francisco Bay Area’s “Best Places to Work” by the San Francisco Business Times, ten years in a row. We offer a highly competitive compensation package and a plethora of benefits.
Splunk is proud to be an equal opportunity workplace and is an affirmative action employer. We value diversity at our company. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or any other applicable legally protected characteristics in the location in which the candidate is applying. For job positions in San Francisco, CA, and other locations where required, we will consider for employment qualified applicants with arrest and conviction records.
About Splunk
Splunk was founded to pursue a disruptive new vision: make machine data accessible, usable and valuable to everyone. Machine data is one of the fastest growing and most complex areas of big data—generated by every component of IT infrastructures, applications, mobile phone location data, website clickstreams, social data, sensors, RFID and much more.
Splunk is focused specifically on the challenges and opportunity of taking massive amounts of machine data, and providing powerful insights from that data. IT insights. Security insights. Business insights. It’s what we call Operational Intelligence.
Since shipping its software in 2006, Splunk now has over 13,000 customers in more than 110 countries around the world. These organizations are using Splunk to harness the power of their machine data to deepen business and customer understanding, mitigate cybersecurity risk, prevent fraud, improve service performance and reduce costs. Innovation is in our DNA – from technology to the way we do business. Splunk is the platform for Operational Intelligence!
Splunk has more than 2,700 global employees, with headquarters in San Francisco, an office in San Jose, CA and regional headquarters in London and Hong Kong.
We’ve built a phenomenal foundation for success with a proven leadership team, highly passionate employees and unique patented software. We invite you to help us continue our drive to define a new industry and become part of an innovative, and disruptive software company.
Benefits & Perks: Wow! This is really cool!
SF Only
Medical, full company paid Dental, Vision and Life Insurance, Flexible Spending and Dependent Care Accounts, Commuter Accounts, Employee Stock Purchase Plan (ESPP), 401(k), 3 weeks of PTO, sick leave, stocked micro kitchens in Splunk offices, catered lunches on Mondays, catered breakfast on Fridays, basketball hoops, ping pong, arcade games, BBQ’s, soccer, “Fun Fridays”.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Non SF
Medical, full company paid Dental, Vision and Life Insurance, Flexible Spending and Dependent Care Accounts, Commuter Accounts, Employee Stock Purchase Plan (ESPP), 401(k), 3 weeks of PTO and sick leave. Our work environments vary by location however we believe in hosting amenities and fun activities to fuel our energy. You may find fully stocked micro kitchens, catered lunches on Mondays and breakfast on Fridays, basketball hoops, ping pong, arcade games, BBQ’s, soccer and “Fun Fridays”.
This isn’t a job – it’s a life changer – are you ready?
Individuals seeking employment at Splunk are considered without regards to race, religion, color, national origin, ancestry, sex, gender, gender identity, gender expression, sexual orientation, marital status, age, physical or mental disability or medical condition (except where physical fitness is a valid occupational qualification), genetic information, veteran status, or any other consideration made unlawful by federal, state or local laws. Click here to review the US Department of Labor’s EEO is The Law notice. Please click here to review Splunk’s Affirmative Action Policy Statement.
Splunk does not discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. Please click here to review Splunk’s Pay Transparency Nondiscrimination Provision.
Splunk is also committed to providing access to all individuals who are seeking information from our website. Any individual using assistive technology (such as a screen reader, Braille reader, etc.) who experiences difficulty accessing information on any part of Splunk’s website should send comments to accessiblecareers@splunk.com. Please include the nature of the accessibility problem and your e-mail or contact address. If the accessibility problem involves a particular page, the message should include the URL of that page.
Splunk doesn't accept unsolicited agency resumes and won't pay fees to any third-party agency or firm that doesn't have a signed agreement with Splunk.

Data Engineer

The Judge Group

Philadelphia, PA
10 days ago
Location: Philadelphia, PA
Description: Our client is currently seeking a Data Engineer in Philadelphia, PA or RTP, NC. The role is remote during COVID and is a permanent position.

MUST HAVE SKILLS
Azure Databricks, Talend, Data Lake, PySpark, Python, Azure Data Factory, PowerShell
Technical/Functional Skills:
• Experience with data ingestion tools like Azure Data Factory and Talend
• Good understanding of Azure Databricks and the Spark ecosystem
• Experience with data analysis and transformation using Python (PySpark)
• Experience setting up Azure Databricks and ingesting data from varied sources
• Experience creating jobs and clusters and setting up security in Azure Databricks
• Experience using custom libraries in Azure Databricks
• Comfortable coding in Python (Scala is an added advantage)
• Years of design and implementation experience in Big Data technologies
• Experienced with performance tuning, troubleshooting, and debugging in Azure Databricks
• Familiarity with database and analytics technologies in the industry, including Data Warehousing/ETL and relational databases
• Ability to manage and work with multiple stakeholders, such as the customer's tech team, the business, and Data Scientists
• Experience in the pharma industry is an added advantage
• Excellent communication skills

Roles & Responsibilities
• Ingest data using Talend or ADF
• Set up the Azure Databricks environment
• Transform data using Azure Databricks
• Set up RBAC for user access and enable audit logs
• Create analytic workflows that span ETL and interactive exploration using PySpark (a brief sketch follows this list)
• Create solutions that support a DevOps approach for delivery and operations of services
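
A minimal PySpark sketch of the ingest-and-transform step described above, as it might run in an Azure Databricks notebook. The storage paths and column names are illustrative assumptions, and `spark` is the session provided by the Databricks runtime:

```python
# Hypothetical Databricks-style transform: read raw Parquet from a data
# lake, clean it, and write a curated copy.
from pyspark.sql import functions as F

# `spark` is supplied by the Databricks runtime; paths are illustrative.
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sales/")

cleaned = (
    raw.dropDuplicates(["order_id"])                     # de-duplicate on key
       .withColumn("order_date", F.to_date("order_ts"))  # timestamp -> date
       .filter(F.col("amount") > 0)                      # drop invalid rows
)

cleaned.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales/"
)
```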

    Contact: ssaxena03@judge.com (subject: Data Engineer)


    This job and many more are available through The Judge Group. Find us on the web at www.judge.com

    AWS Data Engineer

    Kane Partners LLC

    Philadelphia, PA
    $125k - $180k Per Year
    22 days ago

    AWS Data Engineer

    Join a thriving B2C eCommerce company that is perfectly poised to deal with today’s new shopping paradigms and survive beyond them. Based on a model of locally based delivery/service centers, currently in more than 500 locations nationwide, and owning the supply chain from sales and product through delivery, this company is growing at a fantastic pace.

    We are seeking a (mid-career to senior level) DATA ENGINEER

    • Full-time position
    • Forever 100% Remote!
    • Unlimited PTO
    • Equity (RSU)

    Overview

    We are seeking a Data Engineer responsible for data ingestion and integration. The data engineer will provide technical expertise and leadership on architecture, design, and integration of multiple datasets in a large-scale data environment. Emphasis of this position will be in developing and deploying a robust data processing pipeline and streams.

    Responsibilities

    • Responsible for designing, deploying, and maintaining analytics that process environment data at scale
    • Contributes design, configuration, deployment, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
    • Identifies gaps and improves the existing platform to improve quality, robustness, maintainability, and speed
    • Evaluates new and upcoming big data solutions and makes recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
    • Performs development, QA, and DevOps roles as needed to ensure total end to end responsibility of solutions

    Qualifications

    • 2-4 years of experience in a Data Engineering role
    • Experience building, maintaining, and improving Data Processing Pipeline / Data routing in large scale environments
    • Fluency in common query languages, API development, data transformation, and integration of data streams
    • Strong experience with large dataset platforms (e.g., Amazon EMR, Amazon Redshift, Elasticsearch, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB)
    • Experience with data flow monitoring, batching, and streaming
    • Experience working with a variety of databases, and expert-level SQL
    • Experience with data storage performance analysis and enhancements such as data replication, distribution, and compression
    • Creativity to go beyond current tools to deliver the best solution to the problem
    • Data QA experience is a plus
    • Experience with event-based and transaction-based systems is a plus
    • Experience producing and consuming topics to/from Apache Kafka, AWS Kinesis, or Azure Event Hubs is a plus (a brief Kafka sketch follows this list)
    • Experience acquiring data from varied sources such as APIs, data queues, flat files, and remote databases is a plus
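
    A minimal kafka-python sketch of producing to and consuming from a topic, as in the Kafka bullet above. The broker address, topic name, and payload are illustrative assumptions:

```python
# Hypothetical produce/consume round trip against a local broker.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", b'{"order_id": 1, "amount": 19.99}')
producer.flush()  # block until the message is actually sent

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when no messages arrive
)
for message in consumer:
    print(message.value)
```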

    Culturally, this seven-year-old company strives and succeeds at providing a work environment that promotes collaboration, rewards individual heroes, and has plenty of room for career growth!

    We Offer:

    • Generous compensation structure
    • Work from Home - 100% Remote!
    • Equity (Restricted Stock)
    • Comprehensive Benefits Package
    • Unlimited PTO
    • Medical Insurance
    • Dental Insurance
    • Vision Insurance
    • Life Insurance
    • Opportunities for professional and personal growth!
    • Great team culture!

    Senior Data Engineer

    Syapse

    Philadelphia, PA
    10 days ago

    About Syapse

    Syapse is a real-world evidence company on a mission to improve outcomes for all cancer patients. By integrating complete, longitudinal, and continuously updated real-world patient data, we can provide unique insights into patients' care journeys. Our advantage derives from a decade of partnership with the world's largest Learning Health Network of innovation-driven healthcare systems. 

    Syapse enables providers to operationalize precision medicine and deliver the best care today to their patients while helping life sciences companies and regulators accelerate the development and approval of new therapies for patients tomorrow.  Together we are working toward a future in which all cancer patients have access to the best precision care.

    About the role 

    As a Sr. Data Engineer at Syapse you will be responsible for the execution of Data Quality and Data Processing Monitoring Programs within our organization. Your work ensures data processing is performed accurately and defects are found as early as possible. You will also be responsible for the visualization and reporting of key data properties within Syapse. You will be part of our Data Platform organization and will frequently take responsibility for data processing and data orchestration.

    What You'll Accomplish 

    Syapse is a technologically mature organization; we strive for technical excellence and invest heavily in the development of our co-workers. In this position, you will have ownership over data quality and data improvement. You will have an immediate impact on the company's bottom line, working in the heart of the "data lab." The pipelines you are building and the insights you are gathering could be transformational for precision medicine.

    Things you'll accomplish in the first year: 

    • Develop a patient quality scoring algorithm to allow Syapse to select qualified patients for curated datasets for life science companies
    • Automate data degradation and pipeline monitoring so that we can shut down data pipelines and recover from data defects quickly (a brief sketch follows this list)
    • Create Data Quality Program dashboards to allow high visibility into data processing
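
    A minimal sketch, assuming pandas, of the kind of automated degradation check described above. The thresholds and column handling are illustrative assumptions, not Syapse's actual monitoring:

```python
# Hypothetical batch-level data-quality check: flag defects when row
# counts fall or null rates rise past assumed thresholds.
import pandas as pd

def check_quality(df: pd.DataFrame, min_rows: int = 1000,
                  max_null_rate: float = 0.05) -> list:
    """Return human-readable descriptions of defects found in a batch."""
    defects = []
    if len(df) < min_rows:
        defects.append(f"row count {len(df)} below floor {min_rows}")
    for col in df.columns:
        null_rate = df[col].isna().mean()
        if null_rate > max_null_rate:
            defects.append(
                f"{col}: null rate {null_rate:.1%} exceeds {max_null_rate:.0%}"
            )
    return defects
```

    A pipeline step could call this check and halt the run when the returned list is non-empty, in line with the shut-down-and-recover goal above.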

    What you bring to the table 

    • High/expert proficiency with Python, SQL, and Spark
    • Experience with big data technologies like Spark
    • Successful track record of manipulating, processing, and extracting value from large, complex datasets
    • Positive attitude and ability to juggle competing priorities
    • Experience with healthcare data is a plus
    • Participation in Data Quality initiatives and programs
    • Experience building dev tools for backend services

    Meet Sr. Software Engineer, Reba Magier: https://medium.com/syapse/meet-senior-software-engineer-reba-magier-a6341e6b31b0

    Next steps

    Syapse is a globally distributed, technology-enabled insights company. While we love meeting candidates face to face, we're committed to keeping positive momentum across all recruiting efforts during these challenging times. So long as social distancing and limited travel guidelines are in place, we'll conduct all interviews via video chat. We remain committed to providing you the best possible interview experience and opportunities to spend meaningful time getting to know our company, mission, and wonderful teammates. We appreciate your help in achieving this outcome and welcome your feedback and requests on how we can make this a reality for yourself & future candidates.


    Have a quick question about the role? Email careers@syapse.com.
