# Breeze-AI-Devops

![Technical Architecture](./site/Technical_architecture_BreezeAI_framework.png)

# Steps to spin up Infrastructure

Step-1: Get an instance from the client, clone this code, and perform the steps below:

```bash
cd Breeze-AI-Devops/terraform/vpc-jump-apps-nat
# fill out the project name, instance_type, region, and other necessary details in variables.tf
terraform init
terraform plan
terraform apply
```

### This will create the VPC, subnets, IGW, NAT, and EC2 instance, install the necessary apps on it, and write the .pem file to this location.

Step-2: SSH into the instance that was created, using the .pem file generated by the `terraform apply` command.

Step-3: Go to the path "Breeze-AI-Devops/eks-efs-scripts" and fill out the necessary variables in create-cluster.sh

--> ./create-cluster.sh

* This will create the EKS cluster with the desired managed nodes and add the necessary add-ons such as ebs-csi.

Step-4: (Optional) If you need self-managed nodes, use the config below:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-eks-cluster
  region: us-east-1
  version: "1.35"
vpc:
  id:
  subnets:
    public:
      us-east-1a:
        id:
      us-east-1b:
        id:
nodeGroups:
  - name: demo-nodegroup
    instanceType: t3.large
    desiredCapacity: 1
    volumeSize: 70
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub
    subnets:
      -
      -
    privateNetworking: false # ensures nodes get public IPs
```

--> eksctl get cluster --region us-east-1

Step-5: Run the command below on the jump server to update the kubeconfig file for your EKS cluster:

```bash
aws eks --region us-east-1 update-kubeconfig --name <cluster-name>
```

Step-6: Once the EKS cluster is created, push all images to the client's ECR. Make sure your certificate exists in AWS Certificate Manager and note down its ARN.
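The image push in Step-6 can be sketched as below. This is a minimal, hedged example: the account ID `123456789012`, region, repository name, and tag are illustrative assumptions — substitute the client's real values. The AWS/Docker commands are left commented because they require valid credentials and a built image.

```shell
# Hypothetical values -- replace with the client's account ID, region, repo, and tag.
ACCOUNT_ID="123456789012"
REGION="us-east-1"
REPO="breezeai-webui"
TAG="v1.0.0"

ECR_REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
IMAGE_URI="${ECR_REGISTRY}/${REPO}:${TAG}"
echo "Pushing ${IMAGE_URI}"

# Authenticate Docker to ECR, then tag and push (requires AWS credentials):
# aws ecr get-login-password --region "${REGION}" | docker login --username AWS --password-stdin "${ECR_REGISTRY}"
# docker tag "${REPO}:${TAG}" "${IMAGE_URI}"
# docker push "${IMAGE_URI}"

# List ACM certificates to note down the ARN mentioned in Step-6:
# aws acm list-certificates --region "${REGION}" --query "CertificateSummaryList[].CertificateArn"
```

Repeat the tag-and-push for each microservice image (isometric-backend, etc.) before moving on.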
Step-7: Install the Kong ingress controller using the steps below:

```bash
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong kong/ingress -n kong --create-namespace
```

* This will create a classic load balancer.

--> kubectl get svc -n kong (note down the port number mapped to the load balancer's 443 port)

Step-8: Get the name of the Auto Scaling group and the launch template ID from the EKS cluster we created. Run the commands below:

```bash
aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[].AutoScalingGroupName" --output text | grep -i <cluster-name>
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names <asg-name> \
  --region us-east-1 \
  --query "AutoScalingGroups[0].MixedInstancesPolicy.LaunchTemplate.LaunchTemplateSpecification"
```

Step-9: Go to the path "Breeze-AI-Devops/terraform/alb-asg", update all default values in variables.tf, and also add your existing Auto Scaling group name to it. Execute the commands below:

```bash
terraform init
terraform import aws_autoscaling_group.existing_eks_asg <asg-name>
terraform validate
terraform plan
terraform apply
```

* This will create a new ALB, target group, and security group, and integrate the current Auto Scaling group with the newly created load balancer.

Step-10: In the node security group, allow all traffic from the load balancer's security group so that the node health checks succeed. A test nginx pod is created automatically; grab its ingress domain, map the domain in Route 53, and access it in a browser.

Step-11: `cd` into the different microservice directories (breezeai-webui, isometric-backend, Redis), fill out all necessary variables, and run `terraform init`, `terraform plan`, `terraform apply`.

* NOTE: Redis, breezeai-webui, and isometric-backend can be created by Terraform. Postgres & n8n need to be applied manually by going inside the respective microservice's folder and its Manifests folder.
Fill out the file system ID and access point in the respective pv.yaml files.

Step-12: Go to the neo4j path and run the commands below:

```bash
helm repo add neo4j https://helm.neo4j.com/neo4j
helm repo update
```

* Put the relevant certificates (cert.pem, privkey.pem, and fullchain.pem) for your *.domain.com or xyz.domain.com in the certs folder & execute the command below to create the secret:

```bash
kubectl create secret tls neo4j-cert --key ./certs/privkey.pem --cert ./certs/fullchain.pem
```

* Reference the secret in the neo4j-values.yaml file. Before creating neo4j through Helm, first run the patch command below to make gp2 the default storage class:

```bash
kubectl patch storageclass gp2 \
  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```

* Apply the neo4j-values.yaml file to create neo4j:

```bash
helm upgrade --install neo4j neo4j/neo4j -f neo4j-values.yaml
kubectl apply -f neo4j-ing.yaml
```

* After creating neo4j, we need to edit the neo4j StatefulSet (`kubectl edit sts neo4j`) and add the plugins below to its env vars:

```yaml
- name: NEO4J_PLUGINS
  value: '["apoc", "apoc-extended", "graph-data-science"]'
- name: NEO4J_dbms_security_procedures_allowlist
  value: apoc.*,gds.*
```

Step-13: Grab the classic load balancer DNS name you got by applying the neo4j-values.yaml file and map it to your domain (e.g. neo4j.xyz.com) in Route 53. Next, edit the health check of this newly created load balancer, set it to the load balancer's TCP port equivalent of 7687 (the Bolt port), and save. In the security group of the EKS nodes, allow all traffic from the security group of the neo4j classic load balancer. Access neo4j in the browser, input the username and password, and verify you can log in.

Step-14: Log in to the Postgres DB, create the databases, and add the required extensions.

Step-15: Configure a cronjob for the pg_isometric & pg_n8n database backups.

Step-16: Configure Keycloak by discussing with the developers.

Step-17: Add your application code to the repo.
For an example, see the "breezeai-webui" folder in this repo.

Step-18: Add the necessary env vars under Secrets and Variables in this repo's settings.

Step-19: Refer to the GitHub Actions to AWS authentication steps in the github-actions/github-actions-readme.md file.

Step-20: The GitHub Actions pipeline is kept in the .github folder. We can start the actions manually from the "Actions" tab or automate them to trigger as soon as code is pushed.
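The two trigger modes described in Step-20 can be sketched in the workflow file like this. This is a minimal fragment under stated assumptions: the workflow name, branch, and job contents are illustrative, not the actual pipeline — refer to the files in the .github folder for the real definition.

```yaml
# Illustrative trigger block for a workflow under .github/workflows/ (hypothetical name).
name: deploy
on:
  workflow_dispatch:        # enables manual runs from the "Actions" tab
  push:
    branches: [main]        # assumed branch; adjust to your default branch
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build, push to ECR, and deploy steps go here ...
```

With `workflow_dispatch` present alongside `push`, the same pipeline can be started by hand or run automatically on every push.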