Wednesday, March 25, 2026

Urgent requirement of BI Analyst Engineer for Phoenix, AZ (Local Candidate Only)

Hi,

This is Diksha Chaudhary working with Novia Infotech. We have the contract job opportunity below with one of our direct clients and would like to check whether you have any resources available. Please send your consultants' resumes along with their contact information at the earliest to diksha.c@noviainfotech.com.

Role: BI Analyst Engineer

Location: Phoenix, AZ – Onsite (Local Candidates Only)

Duration: 12+ Months

Interview type: Telephonic / Video

Description:

We are seeking a skilled BI Analyst Engineer with strong expertise in SQL, data analytics, and regulatory reporting. The role focuses on developing dashboards, optimizing queries, and ensuring data quality to support reporting and analytics initiatives.

The ideal candidate will have hands-on experience with MicroStrategy, BigQuery, and PostgreSQL, along with strong analytical skills and the ability to work in Agile environments.

Key Responsibilities

Develop, enhance, and maintain MicroStrategy dashboards and reports.
Design and optimize SQL queries using advanced techniques including CTEs and window functions.
Perform query optimization and performance tuning for large datasets.
Support regulatory reporting initiatives ensuring accuracy and timeliness.
Monitor and manage data quality controls, validations, and issue resolution.
Automate manual reporting processes to improve efficiency and reduce operational risks.
Collaborate with business stakeholders, data engineers, and product teams to deliver analytics solutions.
Drive analytics initiatives to provide actionable insights.
Continuously improve reporting frameworks and data processes.
Support FLARE regulatory reporting and data quality monitoring initiatives.
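
The responsibilities above call for advanced SQL with CTEs and window functions. As a minimal illustration of that pattern (table and column names are hypothetical; it runs against an in-memory SQLite database for portability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE daily_totals (report_date TEXT, region TEXT, amount REAL);
    INSERT INTO daily_totals VALUES
        ('2026-03-01', 'west', 100.0),
        ('2026-03-02', 'west', 150.0),
        ('2026-03-01', 'east',  80.0);
""")

# A CTE feeding a window function: pick the latest row per region.
query = """
WITH ranked AS (
    SELECT report_date, region, amount,
           ROW_NUMBER() OVER (
               PARTITION BY region ORDER BY report_date DESC
           ) AS rn
    FROM daily_totals
)
SELECT region, report_date, amount
FROM ranked
WHERE rn = 1
ORDER BY region;
"""
latest = conn.execute(query).fetchall()
print(latest)  # [('east', '2026-03-01', 80.0), ('west', '2026-03-02', 150.0)]
```

The same CTE-plus-ROW_NUMBER shape carries over directly to BigQuery and PostgreSQL.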

Required Skills and Qualifications

Education

Bachelor’s degree in Computer Science, Information Technology, Data Analytics, or a related field is preferred.

Experience

6–10 years of experience in BI, data analysis, or data engineering roles.
Experience working in regulatory reporting environments.
Experience working in Agile development environments.

Technical Skills

Advanced SQL expertise including CTEs, window functions, and complex query optimization.
Hands-on experience with MicroStrategy dashboard development.
Strong knowledge of BigQuery and PostgreSQL.
Experience with data quality monitoring, validation, and governance processes.
Experience automating reporting workflows and processes.
Strong understanding of data analytics and reporting frameworks.

Soft Skills

Strong analytical and problem-solving abilities.
Excellent communication and stakeholder collaboration skills.
Ability to work effectively in cross-functional teams.
Ability to manage multiple priorities in a fast-paced environment.

Preferred Qualifications

Experience supporting financial or regulatory reporting systems.
Experience improving data pipelines and reporting efficiency.
Strong understanding of data governance and compliance practices.

Diksha Chaudhary
US IT Recruiter

www.noviainfotech.com 

E: diksha.c@noviainfotech.com
A: 4421 Avenida Ln, McKinney, TX, 75070

--
You received this message because you are subscribed to the Google Groups "NoviaJobs" group.
To unsubscribe from this group and stop receiving emails from it, send an email to noviajobs+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/noviajobs/CAEm%3D8YX%3Dmu%3D_EgLXT5kTO9azOFBqwTSybsP%3DoLrfEV95v3ANuQ%40mail.gmail.com.

Fortinet Architect L3 SME for Plano, TX - Onsite



Summary:
DevOps / Cloud Engineer with around 8 years of enterprise experience delivering scalable, secure, and highly available cloud platforms across AWS and Azure.
Experienced Site Reliability Engineer (SRE), Cloud Engineer, and DevOps Engineer with a strong background in cloud infrastructure automation, CI/CD pipelines, and security best practices; proficient in GitLab CI/CD, Jenkins, Terraform, Ansible, Docker, Kubernetes, and OpenShift. Skilled across AWS, Azure, and GCP, delivering scalable, secure, and high-performance solutions.
Expert in scripting with Python, Shell, and PowerShell, optimizing system performance using Prometheus, Grafana, and Datadog. Strong expertise in IAM management, network configuration, and secure cloud architecture. Proven leadership and problem-solving abilities, fostering cross-team collaboration to drive efficient, innovative, and resilient cloud solutions.

Technical Expertise:        
Agile Methodologies · Ansible · API Management · AWS · Azure · Backup and Recovery · Bash · CI/CD · Cloud Security · Configuration Management · Containers · Content Delivery Network · Database Management · Datadog · Docker · ELK Stack · Git · GitHub · Go · Google Cloud Platform · Grafana · Hadoop · Helm · Incident Management · Infrastructure as Code · Java · Jenkins · Kafka · Kubernetes · Linux · Load Balancing · Microservices · MySQL · Nagios · NoSQL · PostgreSQL · Prometheus · Python · Scala · Serverless Architecture · Snowflake · Spark · SVN (Subversion) · Terraform · Version Control · VMware · YAML · Maven

Education:
Master’s in Information Science from Trine University (USA) in 2023.
Bachelor’s in Computer Science Engineering (CSE) from Acharya Nagarjuna University in 2018.


Professional Summary:

Client: FIS Global - Addison, TX         Aug 2025 – Present
Role: Sr. DevOps Engineer / Site Reliability Engineer
Roles and Responsibilities:
Deployed and managed Prometheus and Grafana for system metrics and alerting, improving detection of infrastructure bottlenecks.
Designed and developed high-performance backend services and automation tools using Go (Golang) to improve infrastructure reliability and operational efficiency.
Designed, implemented, and maintained CI/CD pipelines for applications deployed on SAP Business Technology Platform using tools like Jenkins and GitHub Actions.
Deployed and managed containerized applications on the OpenShift platform, ensuring seamless continuous integration and delivery (CI/CD) pipelines.
Collaborated with development teams using Git and integrated it with GCP-based CI/CD tools for automated versioning and code deployment.
Automated build, test, and deployment processes for SAP BTP applications using Cloud Foundry and containerized environments.
Developed internal DevOps utilities and monitoring integrations in Go to automate operational workflows and reduce manual effort.
Troubleshot deployment failures, connectivity issues, and performance bottlenecks across BTP services and CI/CD pipelines.
Designed, developed, and maintained AWS Glue ETL jobs to process, transform, and load large-scale structured and semi-structured data.
Developed unit tests in Go using the testing framework, ensuring code reliability and maintaining high test coverage.
Automated data pipelines using AWS Glue Workflows, triggers, and schedules to ensure reliable data processing.
Implemented CI/CD pipelines using GitHub Actions, automating build, test, and deployment processes to enhance software delivery efficiency.
Integrated GitHub with Jenkins to streamline automated testing and deployment workflows, improving developer productivity.
Designed, deployed, and managed scalable Azure cloud infrastructures using Azure Virtual Machines, Virtual Networks, and Load Balancers.
Designed and deployed multicluster Kubernetes environments on AWS EKS, leveraging KCP for API aggregation and workspace management.
Developed custom CRDs, APIResourceSchemas, APIExports, and APIBindings to enable dynamic API discovery and integration with external providers.
Automated infrastructure provisioning and configuration using Terraform and Helm for consistent, repeatable deployments.
Implemented centralized logging and auditing pipelines using Fluentd, CloudWatch, and S3 for compliance and troubleshooting.
Created real-time metrics collection and alerting with Prometheus, Grafana, and AWS CloudWatch to monitor platform health and resource usage.
Integrated Cloudflare and Akamai for edge security, caching, and global traffic acceleration, including Logpush and API Shield features.
Acted as on-call SRE supporting 24/7 production workloads, handling incident triage, mitigation, and escalation.
Improved service reliability and customer experience by embedding SRE principles into platform design, ensuring applications consistently met availability and performance expectations at global scale.
Built unified observability dashboards for tenants and operators, visualizing logs, metrics, events, and tracing data.
Established fine-grained IAM roles, SSO integration, and least-privilege access controls for secure platform operations.
Automated incident response and operational tasks using custom Go-based scripts and services.
Owned the incident lifecycle from detection through mitigation, RCA, and prevention.
Enabled automated API/data prefetch, cache hygiene, and purge workflows to optimize content delivery and reduce latency.
Integrated SAP BTP applications with on-premise SAP systems using SAP Cloud Connector.
Configured and maintained security, role collections, and access management within SAP BTP subaccounts.
Enabled predictable service delivery by defining and operationalizing Service Level Objectives (SLOs) and SLIs, aligning engineering efforts with business priorities and user expectations.
Developed Python and Bash scripts for infrastructure automation, log parsing, and monitoring tasks.
Collaborated with developers, SREs, and security teams to align platform delivery with sprint cycles and compliance requirements.
Accelerated mean time to resolution (MTTR) by leading structured Root Cause Analysis (RCA) and translating findings into long-term platform and process improvements.
Conducted performance tuning, autoscaling, and failover testing to ensure high availability and resilience.
Managed SSL/TLS certificates, WAF policies, and OAuth2/JWT authentication for secure API and CDN endpoints.
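
The SLO/SLI bullet above rests on standard error-budget arithmetic, which a short sketch makes concrete (the target and request counts here are made-up illustration values, not actual production figures):

```python
def error_budget_remaining(slo_target: float, total_requests: int,
                           failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLI."""
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0  # a 100% SLO leaves no budget to spend
    return 1.0 - failed_requests / allowed_failures

# A 99.9% availability SLO over 1,000,000 requests allows 1,000 failures;
# 250 observed failures leave 75% of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # 0.75
```

When the remaining budget approaches zero, the usual SRE response is to shift effort from feature work toward reliability until the budget recovers.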


Client: MetLife, New York, NY        Nov 2024 – Jul 2025
Role: Sr. Site Reliability Engineer / DevOps Security Engineer

Roles and Responsibilities:
Designed, deployed, and managed cloud infrastructure on AWS, Azure, and Google Cloud Platform (GCP) to optimize performance, scalability, and cost-efficiency.
Provisioned and maintained AWS and Azure infrastructure, including EC2, S3, IAM, VPC, Azure Web Apps, Storage, and Active Directory.
Conducted code reviews, performance tuning, and reliability improvements for Go-based services and automation tools.
Implemented monitoring, alerting, and logging strategies for SAP BTP workloads to ensure reliability and high availability.
Implemented infrastructure automation and operational tooling using Go for monitoring, deployment, and incident management.
Built internal platform tooling and microservices in Golang to support DevOps workflows and infrastructure operations.
Managed microservices with Docker, Kubernetes, OpenShift, and Azure Kubernetes Service (AKS).
Implemented continuous integration and delivery pipelines with tools like Git, TeamCity, Octopus, and AWS CodePipeline.
Designed and implemented automated pipelines for AWS EC2 to OCI Compute instance migration, ensuring minimal downtime and optimized performance.
Configured Prometheus to collect real-time metrics from cloud infrastructure, applications, and services for performance monitoring.
Managed infrastructure and service provisioning on SAP BTP including databases, messaging services, and connectivity services.
Provisioned and maintained cloud resources across AWS, Azure, and GCP, including EC2, S3, IAM, VPC, Azure Web Apps, and GCP Compute Engine for scalable deployments.
Developed automation scripts with PowerShell, Ansible, and Chef to streamline deployment and infrastructure management.
Defined and enforced SLOs/SLIs as part of the observability strategy, aligning system reliability targets with business objectives.
Configured Datadog for centralized log aggregation, leveraging log analytics to diagnose issues and improve application performance.
Deployed containerized applications and scaled Kubernetes clusters, enabling efficient orchestration and resource utilization.
Monitored application performance and system health using SAP Cloud ALM and logging tools.
Developed Infrastructure as Code (IaC) solutions using Terraform to automate provisioning of compute, networking, and storage resources in OCI.
Utilized Azure Recovery Services vaults and backups to ensure disaster recovery and data integrity.
Set up Prometheus Alertmanager to trigger alerts based on predefined thresholds, ensuring quick incident response and resolution.
Proficient in using Terraform to define, provision, and manage cloud infrastructure (AWS, GCP, Azure) through code, ensuring consistent and repeatable deployment processes for scalable and secure environments.
Implemented cross-cloud networking automation, configuring OCI Virtual Cloud Network (VCN) to replicate AWS VPC architecture, ensuring seamless application migration.
Expertise in designing, configuring, and maintaining Jenkins pipelines for continuous integration and continuous delivery, automating build, test, and deployment processes to ensure faster and reliable software releases.
Utilized Python for data extraction, transformation, and analysis, leveraging libraries such as Pandas and NumPy to process large datasets.
Optimized performance and scalability with Kubernetes namespaces, load balancing, and monitoring.
Migrated repositories from ClearCase to TFS and implemented branching strategies with Git and TFS.
Automated build and deployment pipelines using Maven, SonarQube, Nexus, and Chef cookbooks.
Experienced in Elastic Beanstalk for deploying web apps and managing AWS services via Java APIs for Lambda.
Configured Grafana alerts to send notifications via Slack, email, and PagerDuty, ensuring proactive issue resolution.
Managed application deployment and lifecycle management using SAP Cloud Transport Management.
Built centralized observability platforms on AWS integrating CloudWatch, ELK stack, and Datadog to enable real-time monitoring across microservices and Kubernetes clusters.
Designed and automated CI/CD workflows, including SCM, build, test, bundle, and deployment using Jenkins, Terraform, and Ansible.
Designed and implemented scalable SaaS-based CI/CD infrastructure and DevOps processes.
Proficient in Kubernetes administration, including RBAC, namespace management, and monitoring with Heapster.
Integrated Datadog APM to trace and optimize microservices, ensuring efficient request handling and response times.
Proficient in integrating Jenkins with various plugins (e.g., Git, Maven, Docker) and third-party tools to enhance automation, improve testing efficiency, and streamline deployment workflows.
Deployed and managed OpenShift clusters for on-premises and Red Hat environments.
Implemented containerized solutions with Docker and Kubernetes, optimizing cluster performance.
Automated configurations using Puppet and Ansible, deploying web servers and applications.

Client: Broadridge, Coppell, TX.       May 2022 – Oct 2024
Role: Site Reliability Engineer / DevOps Cloud Engineer

Roles and Responsibilities:
Expertise in Prometheus, Grafana, ELK Stack, Datadog, and CloudWatch for proactive monitoring, logging, and incident response.
Experienced in Terraform, CloudFormation, and Ansible to automate provisioning and management of cloud resources.
Managed the end-to-end DevOps workflow across all stages, from SCM commit through integration builds and compilation.
Integrated monitoring and logging solutions using OCI Logging and Oracle Cloud Observability, ensuring proactive issue resolution and enhanced system reliability.
Performed kernel tuning and wrote shell scripts for system maintenance and file management.
Integrated observability into CI/CD pipelines, enabling shift-left monitoring and early detection of performance regressions during deployments.
Experienced with Chef: configured the Chef repo, set up multiple Chef workstations, and wrote Chef cookbooks and recipes to automate deployments via Spinnaker, integrated with Jenkins jobs for the CD framework.
Ensured compliance with enterprise security standards and implemented secrets management for applications deployed on SAP BTP.
Skilled in integrating Git repositories with CI/CD tools (e.g., Jenkins, GitLab CI) for automated build, test, and deployment pipelines, accelerating the software delivery process.
Developed automation scripts in core Python with Puppet to deploy and manage Java applications across Linux servers.
Utilized Datadog security monitoring features to track vulnerabilities, detect threats, and ensure compliance with industry standards.
Integrated Grafana with multiple data sources, including Prometheus, Elasticsearch, and Datadog, for centralized monitoring.
Created scripts in Python which are integrated with Amazon API to control instance operations.
Integrated Prometheus with Grafana for real-time visualization and with tools like Kubernetes and Docker for enhanced container monitoring.
Experience in conducting code reviews and resolving merge conflicts through Git, ensuring code quality and consistency while adhering to project standards and best practices.
Built and integrated RESTful APIs using Python frameworks like Flask and FastAPI for seamless communication between microservices.
Implemented containerized applications on Azure Kubernetes Service (AKS), including Kubernetes cluster management.
Used Kubernetes to deploy, scale, load-balance, and manage Docker containers across multiple namespaces.
Proficient in managing project dependencies using Maven’s dependency management system, integrating third-party libraries and frameworks, and ensuring compatibility and version control.
Leveraged Ansible to manage resources across various cloud platforms (AWS, Azure, etc.), enabling seamless provisioning, configuration, and management of cloud infrastructure in a hybrid environment.
Built on open-source technologies such as Docker, Kubernetes, and Terraform, leveraging multiple public and private cloud platforms to deliver a consistent global platform for continuous application deployment.
Implemented IAM roles, policies, security groups, and encryption across cloud platforms to enhance security and compliance with industry standards.
Developed Maven POMs to automate the build process for new projects, as well as integrated them with third-party tools such as SonarQube and Nexus.
Developed Chef Cookbooks for multiple DB configurations to modularize and optimize final product configuration.
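
Several bullets above mention Python scripting for log parsing and monitoring. A minimal self-contained sketch of that kind of utility (the log format and sample lines are invented for illustration):

```python
import re
from collections import Counter

# Matches lines like "2026-03-25 10:00:01 ERROR upstream timeout".
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>INFO|WARN|ERROR) (?P<msg>.*)$")

def count_levels(lines):
    """Tally log levels from an iterable of lines; unparseable lines are skipped."""
    counts = Counter()
    for line in lines:
        match = LOG_LINE.match(line)
        if match:
            counts[match.group("level")] += 1
    return counts

sample = [
    "2026-03-25 10:00:01 INFO service started",
    "2026-03-25 10:00:05 ERROR upstream timeout",
    "2026-03-25 10:00:09 ERROR upstream timeout",
    "not a log line",
]
levels = count_levels(sample)
print(dict(levels))  # {'INFO': 1, 'ERROR': 2}
```

In practice a script like this would feed a threshold check or a Grafana/Datadog metric rather than a print statement.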

Client: Texcel Infotech PVT LTD, India        Jun 2018 – Dec 2021
Role: Cloud Engineer / Linux, Windows Admin                  
                                                                       
Roles and Responsibilities as Cloud Engineer:
Developed a complete end-to-end Jenkins pipeline to test, build, create artifacts, upload them to the JFrog artifact server, and deploy.
Developed a cross-cloud Terraform module library, enabling consistent infrastructure deployment across AWS and OCI, reducing errors by 40%.
Used Grafana dashboards to track application performance, troubleshoot bottlenecks, and optimize system health.
Architected an end-to-end automation framework for migrating 500+ AWS EC2 instances to OCI Compute, reducing manual intervention by 80%.
Automated observability infrastructure using Terraform and AWS CloudFormation, ensuring consistency and scalability across environments.
Developed and maintained Ansible Playbooks to define server configurations, application deployments, and routine system maintenance tasks, improving operational efficiency and reducing manual errors.
Executed AWS Lambda to OCI Functions migration, ensuring compatibility with OCI’s serverless architecture.
Created AWS-to-OCI migration pipelines using Jenkins and GitHub Actions, enabling seamless CI/CD for multi-cloud deployments.
Utilized Ansible for automating the configuration, deployment, and management of servers across multiple environments, ensuring consistent and repeatable infrastructure provisioning.
Worked closely with Oracle Cloud Engineering teams to optimize performance and resolve migration challenges efficiently.
Skilled in configuring Maven POM (Project Object Model) files to define project structure, build lifecycle, and plugin configurations, optimizing the build process for both small and large-scale applications.
Integrated Maven with CI/CD tools (e.g., Jenkins, GitLab CI) to automate build and test processes, ensuring continuous delivery and improving development productivity and software quality.
Automated object storage synchronization between AWS S3 and OCI Object Storage, utilizing OCI Object Lifecycle Policies for efficient data management.
Automated the provisioning of environments by writing Chef recipes and deployed those environments in Docker containers.
Used Prometheus, Grafana, and kubectl to monitor cluster health, diagnose issues, and implement proactive measures for resource optimization and application reliability in Kubernetes environments.
Experienced with AWS EC2, EBS, ELB, Auto Scaling groups, Trusted Advisor, S3, CloudWatch, CloudFront, IAM, and Security Groups.
Expertise in using Git for version control to manage and track code changes, ensuring efficient collaboration across distributed teams and maintaining a clean project history.
Developed a custom AWS-to-OCI security policy mapping tool, converting AWS IAM roles, policies, and security groups to OCI IAM, ensuring compliance.

Roles and Responsibilities as Linux Administration:
Effectively planned and deployed hybrid cloud infrastructure in a production environment.
Analyzed cloud infrastructure and recommended improvements for performance gains and cost efficiency.
Designed the architecture and created the CloudFormation template to facilitate deployment.
Working knowledge of Linux OS fundamentals (file systems, file configuration, directory structure).
Roles and Responsibilities as VMware Administration:
Worked on various incidents such as ESX/ESXi server outages, datastore storage issues, vMotion, patching, snapshots, HA, and DRS.
Used VMware vSphere vCenter Update Manager to apply patches to ESX/ESXi hosts and virtual machines.
Maintained vCenter Servers and created virtual machine templates.
Performed ESX server and virtual machine tasks including vMotion, Storage vMotion, High Availability (HA), DRS (Distributed Resource Scheduling), cloning, and snapshots.

Roles and Responsibilities as Windows Administration
Responsible for remote administration of Windows Server 2003/2008/2012 in a domain environment.
Handled service-request tickets for infrastructure changes, memory, disk, and CPU increases, V2V migrations, and software installation.


Business Analyst – Finance (Banking Domain) for Stamford, CT / Southington, CT / Jericho, Long Island

Hi,

My name is Rohit Chauhan, and I am a Staffing Specialist at Novia Infotech LLC. I am reaching out to you about an exciting job opportunity with one of our clients.

 

Job Title: Business Analyst – Finance (Banking Domain)

Location: Stamford, CT / Southington, CT / Jericho, Long Island (Onsite)


Role Overview

We are seeking a detail-oriented Business Analyst with strong experience in the banking and financial services domain. The ideal candidate will work closely with Finance, Risk, Compliance, and Technology teams to analyze financial data, support regulatory reporting, and enhance financial systems and processes.


Key Responsibilities

  • Perform financial analysis across banking products including deposits, loans, credit portfolios, and treasury operations
  • Support Finance and Risk teams in preparing regulatory reports such as Call Reports, CCAR, ALM, and Basel III metrics
  • Ensure financial processes comply with regulatory standards and internal audit requirements
  • Collaborate with Compliance teams to maintain process documentation and governance standards
  • Partner with Technology teams to optimize financial systems such as:
    • Core Banking Platforms (FIS, Fiserv, Temenos)
    • ERP systems
    • Data warehouses
  • Develop and maintain reports and dashboards using BI tools (Power BI, Tableau, Qlik)
  • Write and refine SQL queries, datasets, and business logic for:
    • Financial reconciliations
    • Loan analytics
    • Profitability reporting
  • Ensure data accuracy, integrity, and consistency across financial systems
  • Work with Treasury, Risk Management, Loan Operations, and Accounting teams to gather requirements and translate them into functional specifications
  • Facilitate stakeholder meetings and present insights in executive-friendly formats
  • Act as a liaison between business and IT teams during project lifecycle (planning, testing, deployment)
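
The reconciliation work listed above boils down to comparing balances between two systems per account and flagging breaks. A hypothetical Python sketch (account numbers, balances, and the tolerance are invented for illustration):

```python
def reconcile(gl_balances, subledger_balances, tolerance=0.01):
    """Return accounts whose GL and subledger balances differ by more than tolerance."""
    breaks = {}
    for account in sorted(set(gl_balances) | set(subledger_balances)):
        gl = gl_balances.get(account, 0.0)
        sl = subledger_balances.get(account, 0.0)
        if abs(gl - sl) > tolerance:
            breaks[account] = round(gl - sl, 2)  # positive: GL exceeds subledger
    return breaks

gl = {"1001": 250_000.00, "1002": 98_500.25}
sub = {"1001": 250_000.00, "1002": 98_400.25, "1003": 12.50}
print(reconcile(gl, sub))  # {'1002': 100.0, '1003': -12.5}
```

In production this logic would typically live in SQL against the data warehouse, with the output feeding a BI dashboard of open breaks.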

Required Skills & Experience

  • 2–5 years of experience as a Business Analyst or Financial Analyst in banking/financial services
  • Strong understanding of banking processes:
    • Loans
    • Deposits
    • Treasury operations
    • General Ledger (GL) accounting
    • Financial controls
  • Hands-on experience with Excel and BI tools (Power BI, Tableau, Qlik)
  • Experience working with core banking systems
  • Strong analytical thinking and problem-solving skills
  • High attention to detail with a risk-aware mindset
  • Excellent communication and documentation skills
  • Ability to work across Finance, Risk, Compliance, and IT teams
  • Comfortable handling large and complex financial datasets

Preferred Qualifications

  • Bachelor’s degree in Finance, Accounting, Information Systems, or related field
  • Experience with SQL or data analytics tools
  • Knowledge of regulatory frameworks:
    • Basel III
    • CECL
    • FFIEC
    • ALM
  • Familiarity with data governance and financial data modeling
  • Experience supporting:
    • Digital banking initiatives
    • Core system upgrades
    • Finance transformation programs

Rohit Chauhan

IT Recruiter

E: rohit.c@noviainfotech.com

www.noviainfotech.com

A: 4421 Avenida Ln, McKinney, TX, 75070

