ONAP AAI

1. Quick Helm

1.1 Remove old charts from the local Helm cache

refresh_helm.sh

#!/bin/bash
# Remove all cached chart archives and repository indexes.
rm -rf ~/.helm/cache/archive/*
rm -rf ~/.helm/repository/cache/*
# Refresh the repository configurations.
helm repo update
# The next "helm search" will now find the newest stable charts in the repository.

Then install the charts into the local repo again, as sketched below.
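
With ONAP OOM this is done from the oom checkout; a minimal sketch, assuming the local repo is already being served (see 1.4 Setup) and the oom checkout sits at ../oom as in the helm template command below:

    # Lint, package and push all ONAP charts to the local repo.
    cd ../oom/kubernetes
    make all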

1.2 helm template - generate deployment YAML for kubectl apply

    helm template --values onap-k8s-5.0.0-td.yaml --output-dir ./helm_rendef ../oom/kubernetes/onap
    
    wrote ./helm_rendef/onap/charts/aai/charts/aai-babel/templates/secrets.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-data-router/templates/secret.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-modelloader/templates/secret.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-search-data/templates/secret.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-sparky-be/templates/secret.yaml
    wrote ./helm_rendef/onap/charts/aai/templates/secret.yaml
    wrote ./helm_rendef/onap/charts/mariadb-galera/templates/secrets.yaml
    wrote ./helm_rendef/onap/charts/so/charts/so-db-secrets/templates/secrets.yaml
    wrote ./helm_rendef/onap/charts/so/charts/so-mariadb/templates/secrets.yaml
    wrote ./helm_rendef/onap/templates/secrets.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-babel/templates/configmap.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-data-router/templates/configmap.yaml
    wrote ./helm_rendef/onap/charts/aai/charts/aai-elasticsearch/templates/configmap.yaml

Then apply recursively:

    kubectl apply --recursive --filename helm_rendef/onap/charts/

The namespace may need to exist beforehand, matching the one set via the values.yaml.
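
A minimal sketch, assuming the target namespace from the values file is onap (as used throughout this page):

    kubectl create namespace onap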

1.3 Download a chart

    helm fetch --untar stable/wordpress
    cd wordpress

1.4 Helm repo

Setup
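
A minimal Helm 2 sketch, assuming the local repo is served by "helm serve" on its default port:

    # Start the local chart repository server in the background.
    helm serve &
    # Register it under the name "local".
    helm repo add local http://127.0.0.1:8879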

Troubleshooting

1. Deploy after a failed deployment

For Helm versions above 2.7 and below 3.0 (https://stackoverflow.com/a/51780556/707704):

    helm undeploy dev --purge
    helm deploy dev local/onap --namespace onap -f onap-k8s-5.0.0-td.yaml --force
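
To check which releases are stuck in a FAILED state before redeploying (plain Helm 2):

    helm ls --all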

Redeploy a failed Helm release:

    helm upgrade --install dev-mariadb-galera local/onap --force

Sometimes there is an error:

    Error: UPGRADE FAILED: kind Secret with the name "onap-docker-registry-key" already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart
   

Fix the above error (existing secrets):

    kubectl delete secrets -n onap onap-docker-registry-key
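
Then retry the upgrade; a sketch reusing the release name and overrides from the deploy command above:

    helm upgrade --install dev local/onap --namespace onap -f onap-k8s-5.0.0-td.yaml --force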
  

1.1 As a last resort, remove everything

    helm undeploy dev --purge

2. Init containers must finish

Check the pod status:

    kubectl describe pods -n onap dev-aai-aai-graphadmin-create-db-schema-659zz

Check whether the init container completed successfully. If not, check its logs:

    kubectl logs -n onap dev-aai-aai-graphadmin-create-db-schema-659zz aai-graphadmin-readiness
  

Its logs show the cause:

    2019-11-23 20:01:46,807 - INFO - Checking if cassandra is ready
    2019-11-23 20:01:46,810 - ERROR - Exception when calling list_namespaced_pod: (403)
    Reason: Forbidden
    HTTP response headers: HTTPHeaderDict({'Date': 'Sat, 23 Nov 2019 20:01:46 GMT', 'Content-Length': '276', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff'})
    HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \"system:serviceaccount:onap:default\" cannot list resource \"pods\" in API group \"\" in the namespace \"onap\"","reason":"Forbidden","details":{"kind":"pods"},"code":403}

Create a cluster role binding (kubeadm 1.15.5):

$ cat onap_conf_extra/onap_rolebinding.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: onap-default-cluster-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: onap
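
Then apply it. Note this binds the broad cluster-admin role to the default service account in onap, which is more than enough to unblock the readiness check:

    kubectl apply -f onap_conf_extra/onap_rolebinding.yaml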

3. Role bindings for accessing the API from inside a pod
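
A quick way to verify what a pod's service account is allowed to do, a sketch assuming the default service account in the onap namespace from the error above:

    kubectl auth can-i list pods -n onap --as=system:serviceaccount:onap:default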

4. Delete all failed pods

    kubectl -n onap delete pods --field-selector=status.phase=Failed