At Buffer, we've been using Kubernetes since 2016. We manage our k8s (Kubernetes) cluster with kops; it has about 60 nodes (on AWS) and runs about 1,500 containers. Our transition to a microservice architecture has been full of trial and error. Even after a few years of running k8s, we are still learning its secrets. This post is about something we thought was a good thing, but that turned out not to be as great as we expected: CPU limits.
CPU limits and Throttling
It is a general recommendation to set CPU limits. Google, among others, highly recommends it. The danger of not setting a CPU limit is that the containers running on a node could exhaust all the available CPU. This can trigger a cascade of unwanted events, such as key Kubernetes processes (like the kubelet) becoming unresponsive. So in theory, setting CPU limits is a great way to protect your nodes.
A CPU limit is the maximum CPU time a container can use within a given period (100 ms by default). The CPU usage of a container will never go above the limit you specified. Kubernetes uses a mechanism called CFS quota to throttle the container and prevent its CPU usage from exceeding the limit. That means CPU is artificially restricted, making your containers' performance lower (and their latency higher).
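For reference, here is what a CPU limit looks like in a pod spec (a minimal sketch; the pod name and image are placeholders). With the default 100 ms CFS period, a limit of `800m` translates to a quota of 80 ms of CPU time per period:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-example              # hypothetical name
spec:
  containers:
    - name: api
      image: example/api:latest  # placeholder image
      resources:
        requests:
          cpu: 300m              # what the scheduler reserves on the node
        limits:
          cpu: 800m              # hard cap: 80 ms of CPU per 100 ms CFS period
```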
What can happen if we don't set CPU limits?
We unfortunately experienced this issue. The kubelet, a process running on every node that is in charge of managing the containers (pods) on that node, became unresponsive. The node then turned into a NotReady state, and the pods that were running on it got rescheduled somewhere else, potentially recreating the issue on the new nodes. Definitely not ideal, is it?
Discovering the throttling and latency issue
A key metric to check when you are running containers is throttling. It indicates the number of times your container has been throttled (exposed, for example, by cAdvisor's container_cpu_cfs_throttled_periods_total). Interestingly, we discovered a lot of containers being throttled whether or not their CPU usage was anywhere near the limits. Here is the example of one of our main APIs:
You can see in the animation that the CPU limit is set at 800m (0.8 core, 80% of a core), while peak usage is at most 200m (20% of a core). After seeing that, we might think we have plenty of CPU headroom before the service gets throttled, right? Now check this one out:
You can notice that CPU throttling occurs even though the CPU usage is below the CPU limit. The maximum CPU usage isn't even near the limit.
We then found a few resources (a GitHub issue, a Zalando talk, an Omio post) discussing how throttling leads to poorer performance and higher latency for your services.
Why do we see CPU throttling while CPU usage is low? The tl;dr is that it's a bug in the Linux kernel that unnecessarily throttles containers with a CPU limit. If you're curious about its nature, I invite you to check out Dave Chiluk's great talk; a written version with more details also exists.
Removing CPU limits (with extra care)
After many long discussions, we decided to remove the CPU limits from all services that were directly or indirectly on our users' critical path.
This wasn't an easy decision, since we value the stability of our cluster. We have experienced instability in our cluster in the past, with services using too many resources and disrupting every other service on the same node. This time was a bit different: we understood more about what our services needed and had a good strategy for rolling this out.
How to keep your nodes safe when removing limits?
Isolating 'No CPU Limits' services:
In the past we've seen some nodes go into a NotReady state, mainly because some services were using too many resources on the node.
We decided to put those services on specific (tainted) nodes, so they would not disrupt the 'bounded' ones. This gives us better control, and makes it easier to identify any issue with a node. We did this by tainting some nodes and adding tolerations to the 'unbounded' services. Check the documentation to see how you can do that.
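As a rough sketch of that setup (the taint key, labels, and names here are made up for illustration), you first taint and label the dedicated nodes, then give the 'unbounded' workloads a matching toleration and node selector:

```yaml
# Taint and label the dedicated nodes first, e.g.:
#   kubectl taint nodes <node-name> dedicated=no-cpu-limits:NoSchedule
#   kubectl label nodes <node-name> dedicated=no-cpu-limits
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-api                 # hypothetical service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: critical-api
  template:
    metadata:
      labels:
        app: critical-api
    spec:
      nodeSelector:
        dedicated: no-cpu-limits     # only schedule on the dedicated nodes
      tolerations:
        - key: dedicated             # tolerate the taint set above
          operator: Equal
          value: no-cpu-limits
          effect: NoSchedule
      containers:
        - name: api
          image: example/api:latest  # placeholder image
          resources:
            requests:                # requests only, no CPU limit
              cpu: 300m
              memory: 256Mi
```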
Assigning the correct CPU and memory request:
The main worry we had was a service using too many resources and causing a node to become unresponsive. Since we now had great observability over all the services running in our cluster (with Datadog), I analyzed a few months of usage for each service we wanted to 'unbound'. I assigned the maximum observed CPU usage as the CPU request, with a margin of more than 20%. This ensures the service has allocated space on a node, and Kubernetes won't try to schedule other services into that space.
You can see in the graph that the peak CPU usage was 242m (0.242 CPU core). Simply take this number and make it a bit higher to get the CPU request. You can notice that since the service is user-facing, its peak CPU usage matches peak traffic time.
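Applied to this example (a sketch; the values follow the numbers above), a 242m peak with a >20% margin gives a request of roughly 300m, set with no CPU limit at all:

```yaml
resources:
  requests:
    cpu: 300m       # observed peak 242m, plus a >20% margin
    memory: 512Mi   # sized the same way from observed memory usage
  # no "limits.cpu": the container can burst above its request,
  # but the request still guarantees it scheduled space on the node
```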
Do the same with your memory usage and requests, and you will be all set! To add more safety, you can use the Horizontal Pod Autoscaler to create new pods when resource usage is high, so Kubernetes schedules them onto nodes that have room. Set an alert if your cluster runs out of room, or use the node autoscaler to add capacity automatically.
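A minimal Horizontal Pod Autoscaler for such a service could look like this (a sketch; the deployment name and thresholds are illustrative). Note that the HPA's CPU utilization target is computed as a percentage of the CPU request, which is another reason to set requests accurately:

```yaml
apiVersion: autoscaling/v2    # use autoscaling/v2beta2 on older clusters
kind: HorizontalPodAutoscaler
metadata:
  name: critical-api-hpa      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: critical-api        # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # % of the CPU request
```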
The downside is that we lose 'container density', the number of containers that can run on a single node. We could also end up with a lot of 'slack' during low-traffic times. You could also hit high total CPU usage, but node autoscaling should help with that.
Results
I'm happy to report really great results: after a few weeks of experimentation, we've already seen significant latency improvements across all the services we modified:
The best result was on our main landing page (buffer.com), where we made the service up to 22x faster!
Is the Linux kernel bug fixed?
The bug has been fixed and merged into the kernel for Linux distributions running kernel 4.19 or higher (kudos again to Dave Chiluk for finding and fixing it).
However, as of September 2nd, 2020, reading the Kubernetes issue, we can see various Linux projects still referencing it, so I guess some Linux distributions still have the bug and are working on integrating the fix.
If your nodes run a Linux distribution with a kernel version below 4.19, I'd recommend upgrading to the latest version of that distribution, but in any case you should try removing the CPU limits and see whether you get any throttling. Here is a non-exhaustive list of managed Kubernetes services and Linux distributions:
- Debian: The latest version, buster, has the fix; it looks quite recent (August 2020). Some previous versions might have been patched.
- Ubuntu: The latest version, Ubuntu Focal Fossa 20.04, has the fix.
- EKS has had the fix since December 2019. Upgrade your AMI if you have a version older than that.
- kops: Since June 2020, kops 1.18+ will start using Ubuntu 20.04 as the default host image. If you're using an older version of kops, you'll probably have to wait for the fix. We are currently in this situation.
- GKE (Google Cloud): The kernel fix was merged in January 2020. But it does look like throttling is still happening.
PS: Feel free to comment if you have more precise information; I'll update the post accordingly.
Has the fix solved the throttling issue?
I'm unsure whether it totally solved the issue. I will give it a try once we get to a kernel version where the fix has been implemented, and will update this post accordingly. If anyone has upgraded, I'm keen to hear their results.
Takeaways
- If you run Docker containers under Linux (whether with Kubernetes, Mesos, or Swarm), your containers might be underperforming because of throttling.
- Upgrade to the latest version of your distribution, in the hope that the bug is fixed.
- Removing CPU limits is a solution to this issue, but it is dangerous and should be done with extra care (prefer upgrading your kernel and monitoring throttling first).
- If you remove CPU limits, carefully monitor CPU and memory usage on your nodes, and make sure your CPU requests are set correctly.
- A safe way to do this is to use the Horizontal Pod Autoscaler to create new pods when resource usage is high, so Kubernetes schedules them onto nodes that have space.
👉 Hacker News update: lots of insightful comments. I've updated the post with better recommendations. You should prefer upgrading your kernel version over removing the CPU limits, though some throttling will still be present. If your goal is low latency, remove the CPU limits, but be really mindful when doing so: set proper CPU requests and add the necessary monitoring. Also read the comment written by Tim Hockin from Google (one of the Kubernetes creators).
I hope this post helps you get performance gains on the containers you are running. If so, don't hesitate to share or comment with your own insights.
Special thanks to Dmitry, Noah, and Andre, who advised me on this.
Next reads:
👉 Why you should have a side project
👉 How we share technical knowledge in a remote team, across timezones
Professional Reference articles are designed for health professionals to use. They are written by UK doctors and based on research evidence, UK and European Guidelines. You may find the Lumbar Puncture (Spinal Tap) article more useful, or one of our other health articles.
Treatment of almost all medical conditions has been affected by the COVID-19 pandemic. NICE has issued rapid update guidelines in relation to many of these. This guidance is changing frequently. Please visit https://www.nice.org.uk/covid-19 to see if there is temporary guidance issued by NICE in relation to the management of this condition, which may vary from the information given below.
Cerebrospinal Fluid
Cerebrospinal fluid (CSF) is found in the subarachnoid space of the brain (within the ventricles) and spinal canal. It is produced by the choroid plexus in the ventricles of the brain and by the cerebral vessels, at a rate of 500 ml/day. Production matches reabsorption, so at any one time in an adult the average volume of CSF is about 150 ml.
Indications for lumbar puncture[1, 2]
For information on performing a lumbar puncture and sampling, see the separate Lumbar Puncture article.
- To investigate or exclude meningitis: bacterial, viral, tuberculous, cryptococcal, chemical, carcinomatous.
- To exclude subarachnoid haemorrhage in acute severe headache.
- To investigate neurological disorders: multiple sclerosis, sarcoidosis, Guillain-Barré syndrome, chronic inflammatory demyelinating polyneuropathy, mitochondrial disorders, leukoencephalopathies, paraneoplastic syndromes.
- To demonstrate and manage disorders of intracranial pressure: idiopathic intracranial hypertension, spontaneous intracranial hypotension.
- To administer therapeutic or diagnostic agents: spinal anaesthesia, intrathecal chemotherapy, intrathecal antibiotics, intrathecal baclofen, contrast media in myelography or cisternography.
Analysis
It is helpful to note the appearance of CSF and the opening pressure (normal 10-20 cm H2O). Samples are usually sent for:
Biochemistry
- Protein - high (>0.4 g/L) levels seen in infection and infiltration disorders (falsely high results are seen if the sample is contaminated with blood). Highly elevated levels (>1 g/L) are seen in Guillain-Barré syndrome and tuberculous meningitis.
- Glucose - a blood sample for glucose should be taken at the same time as the lumbar puncture. CSF glucose is usually 60-80% of plasma glucose. A reduced level implies there is increased uptake of glucose in the CNS - eg, presence of micro-organisms.
Microscopy, culture and sensitivity
- Cell count - white cells with differential (neutrophils and lymphocytes) and red cells. When performing a lumbar puncture, red cells may be present as a result of damage to a blood vessel during the procedure (commonly called a 'bloody tap'). In these instances, the initial CSF is red but this is followed by clearer CSF.
- Gram stain - for bacterial organisms.
- Culture - if appropriate.
Additional investigations
- Xanthochromia - yellow appearance of centrifuged CSF resulting from red cell breakdown products, oxyhaemoglobin and bilirubin and representing a high likelihood of subarachnoid haemorrhage. This may be visualised by the naked eye but the use of spectrophotometry has superseded this. It is the last of three consecutively obtained samples which is examined.
- Oligoclonal bands - seen in multiple sclerosis and neurosyphilis.
- Virology.
- Cytology - requires larger volumes of CSF than other tests.
- Polymerase chain reaction (PCR) - eg, for tuberculosis (TB), and viral and partially treated bacterial meningitis.
- Bacterial antigen testing - may be useful if PCR is not available and the patient partially treated.
- India ink staining for cryptococcus.
Cerebrospinal fluid findings in specific scenarios[2, 3, 4]
Please note that the following are examples of results and often CSF results do not necessarily 'fit' into a standard set. Thus, CSF results should not be considered alone but in conjunction with history and examination findings and the results of other investigations. A good example of this is encephalitis - it is possible that the CSF is 'normal' (as defined below) but the clinical presentation and CT scan findings might be highly suggestive, in which case the diagnosis will most likely be encephalitis.
'Normal'
- Clear and colourless appearance.
- Protein level - 0.2-0.4 g/L (neonate <1.7 g/L).
- Glucose level - 60-80% of plasma glucose.
- WCC <5 per mm3 (higher in neonates up to 20 per mm3).
- No organisms.
- Opening pressure 10-20 cm H2O.
Bacterial meningitis
- Cloudy and turbid CSF (if severe).
- Raised protein >1.5 g/L.
- Glucose level is <50% of the plasma level.
- Cell count is high (>1,000 per mm3) and mostly neutrophils.
- May see organisms - eg, Gram-negative diplococci in Neisseria meningitidis.
- Opening pressure is usually high.
Viral/aseptic meningitis or encephalitis
- Clear CSF.
- Protein is raised or at the high end of normal.
- Glucose level is usually within normal limits (may be reduced in some cases of mumps and herpes simplex).
- Cell count is high and mostly lymphocytes.
- No organisms usually and PCR or special stains may be needed to identify cause.
- Opening pressure may or may not be raised.
Tuberculous meningitis
- Clear or slightly cloudy appearance (there may be cobweb-like stranding).
- Raised protein >1.5 g/L (much higher than bacterial meningitis).
- Glucose level is <50% of the plasma level.
- Cell count is high with a mixed pleocytosis and mainly lymphocytes.
- Opening pressure is usually raised but can be high normal.
- Negative PCR may help rule out TB quickly.
Subarachnoid haemorrhage
- Rarely, CSF is continuously blood-stained to the naked eye and, if subsequent analysis reveals an equal number of RBCs in all three samples, this indicates a subarachnoid haemorrhage.
- CSF should be examined for xanthochromia.
- Protein is raised or at the high end of normal.
- Glucose level is usually low.
- High number of RBCs.
- No organisms.
- Opening pressure is usually high if excessive RBCs are present.
Doherty CM, Forbes RB; Diagnostic Lumbar Puncture. Ulster Med J. 2014 May;83(2):93-102.
Majed B, Zephir H, Pichonnier-Cassagne V, et al; Lumbar punctures: use and diagnostic efficiency in emergency medical departments. Int J Emerg Med. 2009 Nov 19;2(4):227-35. doi: 10.1007/s12245-009-0128-5.