
k8sgpt operator not working with local-ai server: why am I receiving a panic error in the k8sgpt operator pod? #355

Open

szemmour-rh opened this issue Feb 22, 2024 · 1 comment

@szemmour-rh

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've included steps to reproduce the behavior

Affected Components

  • K8sGPT (CLI)
  • K8sGPT Operator

K8sGPT Version

No response

Kubernetes Version

No response

Host OS and its Version

No response

Steps to reproduce

1. Install the LocalAI server, using the values file below:
helm install local-ai go-skynet/local-ai -f values.yaml

```sh
cat <<EOF > values.yaml
deployment:
  image: quay.io/go-skynet/local-ai:latest
  env:
    threads: 14
    contextSize: 512
    modelsPath: "/models"
# Optionally create a PVC, mount the PV to the LocalAI Deployment,
# and download a model to prepopulate the models directory
modelsVolume:
  enabled: true
  url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
  pvc:
    size: 6Gi
    accessModes:
    - ReadWriteOnce
  auth:
    # Optional value for HTTP basic access authentication header
    basic: "" # 'username:password' base64 encoded
service:
  type: ClusterIP
  annotations: {}
  # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
  # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"
EOF
```
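Before moving on to the operator, it can be worth confirming that the LocalAI API is reachable and has loaded the model. A quick check along these lines should work, assuming the release ended up in the `local-ai` namespace (as the `baseUrl` used later implies) and the chart's default Service name and port 8080:

```sh
# Forward the local-ai Service to the local machine (Service name/port assumed from chart defaults)
kubectl -n local-ai port-forward svc/local-ai 8080:8080 &

# LocalAI exposes an OpenAI-compatible API; the downloaded model (ggml-gpt4all-j.bin) should be listed
curl http://localhost:8080/v1/models
```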
2. Install the K8sGPT operator and apply the K8sGPT custom resource:
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm install k8sgpt-operator k8sgpt/k8sgpt-operator

```sh
kubectl -n local-ai apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-local
  namespace: local-ai
spec:
  backend: localai
  # use the same model name here as the one you plugged
  # into the LocalAI helm chart's values.yaml
  model: ggml-gpt4all-j.bin
  # kubernetes-internal DNS name of the local-ai Service
  baseUrl: http://local-ai.local-ai.svc.cluster.local:8080/v1
  # allow K8sGPT to store AI analyses in an in-memory cache,
  # otherwise your cluster may get throttled :)
  noCache: false
  version: v0.2.7
  enableAI: true
EOF
```
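After applying the CR, it helps to check what the operator actually created before digging into logs, to see whether the failure is in the operator itself or downstream in the k8sgpt deployment (resource names below are illustrative):

```sh
# The K8sGPT custom resource as the operator sees it
kubectl -n local-ai get k8sgpt k8sgpt-local -o yaml

# The k8sgpt deployment/pod the operator is expected to create from the CR
kubectl -n local-ai get deploy,pods
```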

Expected behaviour

The K8sGPT operator should deploy K8sGPT and start communicating with the local-ai server to load the model.

Actual behaviour

Running `oc logs k8sgpt-operator-1-controller-manager-6d6cc59fcc-zrxkg` gives:

2024-02-22T04:36:20Z    INFO    Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference  {"controller": "k8sgpt", "controllerGroup": "core.k8sgpt.ai", "controllerKind": "K8sGPT", "K8sGPT": {"name":"k8sgpt-local","namespace":"k8sgpt"}, "namespace": "k8sgpt", "name": "k8sgpt-local", "reconcileID": "b2d9f9c9-b6a8-4227-9f0e-209ad0ce730e"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x16ee00d]

goroutine 129 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:115 +0x1e5
panic({0x18af4e0?, 0x2b4bf60?})
        /usr/local/go/src/runtime/panic.go:914 +0x21f
github.com/k8sgpt-ai/k8sgpt-operator/controllers.(*K8sGPTReconciler).Reconcile(0xc0004f51a0, {0x1db0e98, 0xc0005ffb90}, {{{0xc000529a90?, 0x0?}, {0xc000529a80?, 0x410785?}}})
        /workspace/controllers/k8sgpt_controller.go:141 +0x5ed
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1db0e98?, {0x1db0e98?, 0xc0005ffb90?}, {{{0xc000529a90?, 0x17e9e20?}, {0xc000529a80?, 0xc000062bc0?}}})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:118 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0002e74a0, {0x1db0ed0, 0xc0002b5cc0}, {0x193e5a0?, 0xc000062bc0?})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:314 +0x368
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0002e74a0, {0x1db0ed0, 0xc0002b5cc0})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:265 +0x1af
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:226 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 98
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222 +0x565

Additional Information

I was following the article here:

https://itnext.io/k8sgpt-localai-unlock-kubernetes-superpowers-for-free-584790de9b65

@arbreezy
Member

Hey @szemmour-rh, this blog post is outdated and the K8sGPT CR has changed significantly; I suggest you go through the README and try again with your local LLM deployment.
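For reference, newer operator releases nest the AI settings under `spec.ai` rather than at the top level of `spec`, so the CR ends up in roughly the following shape. This is only a sketch following the same heredoc pattern as above; field names and the `version` value should be double-checked against the README of the operator release actually installed:

```sh
kubectl -n local-ai apply -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-local
  namespace: local-ai
spec:
  ai:
    enabled: true
    backend: localai
    model: ggml-gpt4all-j.bin
    # kubernetes-internal DNS name of the local-ai Service
    baseUrl: http://local-ai.local-ai.svc.cluster.local:8080/v1
  noCache: false
  # pick a k8sgpt version supported by the installed operator release
  version: v0.3.8
EOF
```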
