Splunk Infrastructure Engineer
T Rowe Price Group Inc
Contract Owings Mills, Maryland, United States Posted 3 months ago
About Position
Splunk Infrastructure Engineer (Contract)
$80.00 / Hourly
Owings Mills, Maryland, United States
Skills
• 3 years’ experience managing and configuring Splunk Enterprise and/or Splunk Cloud
• Experience with Splunk clustered deployment topology
• Experience with Linux and Windows agents for Splunk administration
• Experience designing, developing, and deploying cloud-based solutions using AWS
• Experience onboarding new data configurations, creating new dashboards, and extracting information through Splunk
• Experience writing or modifying custom Splunk add-ons
• Demonstrated proficiency with scripting and automation (Bash, Python, other programming languages)
• Familiarity with Splunk REST APIs
• Strong scripting skills (e.g., Python, Bash) for automation and custom development
• In-depth knowledge of log management, data onboarding, and SIEM principles
Description
This is a hybrid position - candidates must be able to work onsite in Owings Mills on Mondays & Tuesdays every week (no exceptions), with the other three days per week remote.
Core skills/requirements for this position are: Splunk, AWS (especially EC2 & DNS), and Ansible for automation.
We are looking for candidates with heavy Splunk administration experience and proven experience building out Splunk-related infrastructure.
A Splunk certification is preferred but not required.
Responsibilities
• Splunk Certification (Admin or Architect)
• Experience with Ansible Tower automations
• Experience using GitLab
• Experience with large platform migration efforts
• Experience with AWS OpenSearch
• Experience with Cribl
• Expertise in languages such as Java and Python; implementation knowledge of data processing pipelines using such languages to extract, transform, and load (ETL) data
• Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets
• Troubleshoot and resolve issues related to data processing, storage, and retrieval
• 3–5 years’ experience designing, developing, and deploying data lakes using AWS native services (S3; Glue Crawlers, ETL, and Catalog; IAM; Terraform; Athena)
• Experience developing systems for data extraction, ingestion, and processing of large volumes of data
• Experience with data pipeline orchestration platforms
• Experience with Ansible/Terraform/CloudFormation scripts and Infrastructure as Code scripting is required
• Implement version control and CI/CD practices for data engineering workflows to ensure reliable and efficient deployments
• Proficiency in implementing monitoring, logging, and alerting solutions for data infrastructure (e.g., Prometheus, Grafana)
• Proficiency in distributed Linux environments