Replies: 2 comments 2 replies
-
The LinkedListNNPS error usually just indicates that your particles have moved very far apart due to extremely large accelerations. To see whether your kernel was used, you can look at the log file or inspect the generated Cython code and check that your kernel indeed appears there.
2 replies
-
Dear Stephan,
The implementation process is correct. You do not need to do it in c_kernel etc.
Look at your simulation and figure out why it is blowing up. I suggest looking at the density and volume values. Just calculate the density using your kernel first.
Regards,
Pawan
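The suggested first check — computing the summation density with the new kernel before running a full simulation — can be sketched outside PySPH. This is a minimal plain-Python illustration, not PySPH code; the function name `convex_kernel_1d`, the particle spacing, and the ratio h = 2*dx are assumptions made for the example:

```python
def convex_kernel_1d(rij, h):
    # Plain-Python stand-in for the 1D convex kernel from the post:
    # W(q) = (5/48) / h * (2 - q)^3 * (1 + q/2) for q = r/h <= 2, else 0.
    q = rij / h
    if q > 2.0:
        return 0.0
    return (5.0 / 48.0) / h * (2.0 - q) ** 3 * (1.0 + 0.5 * q)

# A uniform 1D lattice of particles with spacing dx and mass m = rho0 * dx.
rho0 = 1000.0   # reference density (made up for the example)
dx = 0.1        # particle spacing (made up for the example)
h = 2.0 * dx    # smoothing length; the h/dx ratio is an assumption
x = [i * dx for i in range(100)]
m = rho0 * dx

# Summation density at an interior particle:
# rho_i = sum_j m_j * W(|x_i - x_j|, h)
i = 50
rho_i = sum(m * convex_kernel_1d(abs(x[i] - xj), h) for xj in x)
print(rho_i)  # within a few percent of rho0 for a correctly normalized kernel
```

If this comes out wildly different from the reference density, the normalization factor or the support radius of the kernel is the first place to look.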
…On Tue, Apr 9, 2024, 5:21 AM Stephan ***@***.***> wrote:
The log file shows indeed that I am using the kernel that I wanted to use. Unfortunately, it doesn't show whether I incorporated the kernel correctly. What would be the procedure for implementing a custom kernel correctly? For implementing your own equations or integrators there are many examples in the documentation to verify your own work, but for custom kernels this is not the case. What I did is simply add an additional class called ConvexKernel to my Python script. This class was just a copy of the QuinticSpline class, with the methods adjusted according to the paper. I didn't include a get_deltap method, since the convex kernel doesn't have an inflection point (maybe this causes the code to crash; does PySPH require a get_deltap method in a kernel class?). Below you can see my implementation of the convex kernel as presented in the paper. I simply put this piece of code in my simulation file, e.g. in the elliptical_drop_no_scheme.py file from the test cases, and I used kernel = ConvexKernel(dim=2) in the create_solver method.
```python
class ConvexKernel(object):
    def __init__(self, dim=2):
        self.radius_scale = 2.0
        self.dim = dim
        if dim == 1:
            self.fac = 5.0 / 48.0
        elif dim == 2:
            self.fac = 15.0 / (M_PI * 64.0)
        elif dim == 3:
            self.fac = 21.0 / (M_PI * 128.0)

    def kernel(self, xij=[0., 0., 0.], rij=1.0, h=1.0):
        h1 = 1. / h
        q = rij * h1

        # get the kernel normalizing factor
        if self.dim == 1:
            fac = self.fac * h1
        elif self.dim == 2:
            fac = self.fac * h1 * h1
        elif self.dim == 3:
            fac = self.fac * h1 * h1 * h1

        tmp2 = 2.0 - q
        tmp1 = 0.5 * q + 1
        if q > 2.0:
            val = 0.0
        else:
            val = tmp2 * tmp2 * tmp2 * tmp1

        return val * fac

    def dwdq(self, rij=1.0, h=1.0):
        h1 = 1. / h
        q = rij * h1

        # get the kernel normalizing factor
        if self.dim == 1:
            fac = self.fac * h1
        elif self.dim == 2:
            fac = self.fac * h1 * h1
        elif self.dim == 3:
            fac = self.fac * h1 * h1 * h1

        tmp2 = 2.0 - q
        if q > 2.0:
            val = 0.0
        else:
            # analytic derivative of (2 - q)^3 * (1 + 0.5*q)
            val = -2.0 * (1.0 + q) * tmp2 * tmp2

        return val * fac

    def gradient(self, xij=[0., 0., 0.], rij=1.0, h=1.0, grad=[0, 0, 0]):
        h1 = 1. / h

        # compute the gradient; guard against rij == 0
        if rij > 1e-12:
            wdash = self.dwdq(rij, h)
            tmp = wdash * h1 / rij
        else:
            tmp = 0.0

        grad[0] = tmp * xij[0]
        grad[1] = tmp * xij[1]
        grad[2] = tmp * xij[2]

    def gradient_h(self, xij=[0., 0., 0.], rij=1.0, h=1.0):
        h1 = 1. / h
        q = rij * h1

        # get the kernel normalizing factor
        if self.dim == 1:
            fac = self.fac * h1
        elif self.dim == 2:
            fac = self.fac * h1 * h1
        elif self.dim == 3:
            fac = self.fac * h1 * h1 * h1

        tmp2 = 2.0 - q
        tmp1 = 0.5 * q + 1

        # compute the kernel & gradient at q
        if q > 2.0:
            w = 0.0
            dw = 0.0
        else:
            w = tmp2 * tmp2 * tmp2 * tmp1
            dw = -2.0 * (1.0 + q) * tmp2 * tmp2

        return -fac * h1 * (dw * q + w * self.dim)
```
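Another cheap consistency check for a custom kernel is its normalization: in 2D the kernel must satisfy ∫ W dA = 1 over its support. The sketch below is plain Python, independent of PySPH; `convex_kernel_2d` and `integral_2d` are names invented for this example, and it integrates the 2D form with the 15/(64π) factor used above:

```python
from math import pi

def convex_kernel_2d(rij, h):
    # Plain-Python stand-in for the 2D convex kernel from the post:
    # W(q) = fac / h^2 * (2 - q)^3 * (1 + q/2) for q = r/h <= 2, else 0.
    q = rij / h
    if q > 2.0:
        return 0.0
    fac = 15.0 / (64.0 * pi)
    return fac / (h * h) * (2.0 - q) ** 3 * (1.0 + 0.5 * q)

def integral_2d(h=1.0, n=20000):
    # Midpoint rule for the radial integral \int_0^{2h} W(r) 2*pi*r dr.
    dr = 2.0 * h / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * dr
        total += convex_kernel_2d(r, h) * 2.0 * pi * r * dr
    return total

print(integral_2d())  # should be close to 1.0 if the factor is correct
```

The result should be independent of h; if the integral were far from one, the normalization constants in `__init__` would be the first place to look.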
0 replies
-
I tried to incorporate a custom kernel (a convex kernel) which I found in https://pure.tue.nl/ws/files/3858217/376670351851652.pdf. I simply copied the QuinticSpline class from the pysph.base.kernels module and adjusted the functions, but trying to run the code crashes with a LinkedListNNPS error. I suppose this is due to the way I tried to implement my own kernel, so what would be the best way to go about it?