Topics - johndchch

#1
Despite having the option 'do not pin engine packet processors to dedicated CPU cores' selected, I can see that eastpect is running ONLY on cpu 0 (watching core loading with htop whilst stress testing confirms this).

cpuset -g on the eastpect pid gives this:

Eastspect Instance 0 pid= 61127
current cpu affinity
pid 61127 mask: 0
pid 61127 domain policy: first-touch mask: 0
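
For anyone wanting to reproduce the query, the exact invocation would be along these lines (the pid is the one from the output above):

cpuset -g -p 61127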

If I manually set the cpu mask to all cores using cpuset, I see what I would expect: much more even cpu utilisation at high loads.
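
Setting the mask looked something like this (a sketch assuming pid 61127 and an 8-core box, so cores 0-7):

cpuset -l 0-7 -p 61127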

Eastspect Instance 0 pid= 61127
current cpu affinity
pid 61127 mask: 0, 1, 2, 3, 4, 5, 6, 7
pid 61127 domain policy: first-touch mask: 0

If I un-tick 'do not pin' it sets affinity to cpu 1 as previously, so it looks like there's been some sort of regression in the 'do not pin' code in the new version (ticking it now basically pins the eastpect process to cpu 0, which is obviously NOT what is intended).
#2
Whilst troubleshooting very uneven core loading I noticed that each eastpect instance seems to be locked to a single core.

e.g.

cpuset -g -p <pid of eastpect instance 0>
pid 17862 mask: 1
pid 17862 domain policy: first-touch mask: 0

I presume this is done either to aid latency or to allow for multiple interfaces (and hence multiple eastpect instances).

The question is: for a single LAN interface config (so a single eastpect instance), would setting the mask to all available cores make more sense?

In a few quick experiments, changing the mask to all cores seems to improve the single-core overloads I was seeing, and doesn't seem to affect performance in any negative way.
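
A minimal way to apply the wider mask to every running eastpect instance might be something like this (a sketch, assuming the engine processes match the name 'eastpect' and that cores 0-7 exist):

for p in $(pgrep eastpect); do cpuset -l 0-7 -p $p; done

( note the mask change isn't persistent, so this would need re-running after each engine restart )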
#3
Just grabbed the rc1 image and went to install it into a vm for testing. Configuring the vm for freebsd 13 64-bit, the installer launches ok, but when it gets to partitioning (using the default UFS option), whilst it sees the virtual hdd it fails with errors about being unable to create partitions.

Editing the vm settings to use the LSI SAS virtual driver rather than the default vmware paravirtual driver (paravirtual is the default for freebsd 13 on esxi7; lsi sas is the default for freebsd 12) lets the partitioner run, and the install then goes ok. Once installed I switched back to the paravirtual driver and the system still boots and runs fine, so it looks to be an issue only with the installer environment and/or the partitioning tool, not the running system's paravirtual kernel drivers.
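
For anyone else hitting this, the installer's shell has the standard FreeBSD base tools for a quick look at the disk state before partitioning:

camcontrol devlist    ( confirms the virtual disk is visible to CAM )
gpart show            ( lists any existing partition tables )

If the disk shows up in camcontrol but partition operations still fail, that narrows it down to the installer/partitioner side rather than the disk not being seen at all.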