Results for topic "monitoring":

Monitor server and services

… Learn how to monitor the performance of censhare servers and services and how to trace errors. …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services

Performance analysis of censhare Server

… How to perform a censhare Server basic performance analysis (ca. 15-60 min.), covering everything from slowness for all logged-in users up to the case that no new client login is possible.
Important hints: In case of performance problems, always perform these checks as a first step. Execute them on the application server where the performance issue appears. A censhare Server performance problem can also cause censhare Client login problems. Always save and attach the analysis data (marked as #analysis data in the text) before a server restart. Otherwise, further analysis of the root cause is not possible.
Basic checks: Perform the following basic checks before you move on to a deeper analysis.
0 // Check for OutOfMemory. Log in via SSH. Execute the following Unix commands to see if the server got an OOM error: ls -ltr /opt/corpus/*.hprof and grep OutOfMemory ~/work/logs/server-0.*.log. If a heap dump file with the current timestamp exists and its file size no longer increases, the JVM process has hit an OOM error and the heap dump (-XX:+HeapDumpOnOutOfMemoryError) is complete. Restart the censhare Application Server (not the hardware) to resolve the incident. For further analysis, check whether the server has enough JVM heap (-Xms/-Xmx) allocated. If it does, transfer the server-0.*.log files and the heap dump (*.hprof) file to request a heap dump analysis.
1 // Check the system load. Log in via SSH. Execute the Unix top command and watch it for a few seconds. Then execute ./jvmtop.sh (for more information, see https://ecosphere.censhare.com/en/help-and-customer-service/article/2589558) or top -H and watch it for a few seconds. Take a few screenshots of the top output (#analysis data) and of the jvmtop/top -H output (#analysis data). top: check whether the censhare Server (java) process has a permanently high load (100%). jvmtop: check whether individual censhare Server threads have a high load. Note that on some systems 100% load means all cores, and on others 100% means one core; in the latter case, 250% on a 4-core system is still OK. Check whether the whole system has a high load (values of 1-2 are normal, depending on the number of CPUs). A load of 1 means one core is fully used; a larger value means processes are waiting for execution if only one core is available. So a load of 3 on a 4-core system is still OK. Clue: if there are multiple Java processes, use the Unix jps command to identify the PID of the censhare Server java process.
2 // Check the censhare garbage collection log files. Download the log files with the Unix command cssgetlogs (2459174); cssgetlogs is a censhare-internal Unix script, so partners/customers can use SCP to download the log files (#analysis data). The following grep examples apply when using a "throughput" collector (Parallel GC/Parallel Old GC), which performs a stop-the-world collection when memory is full. They do not apply to a concurrent collector such as Garbage First (G1); G1 GC logs can be visualized and analyzed with a GC log analysis tool, for example http://gceasy.io/ (see https://blog.gceasy.io/2016/07/07/understanding-g1-gc-log-format/). Log in via SSH, execute the Unix lgl command (go to the end of the log file) and watch it for a few seconds. Check for long "Full GC" times and/or frequent intervals (a Full GC every hour with a duration of max. 10 seconds would be good, but it depends on the system). A Full GC stops the censhare process, so these stops should be short and rare; stops of less than 3 seconds and only every 3 minutes or even longer are perfect. Check that the garbage collection actually does its job: ParOldGen: 292282K-> … (< 5.x): "RangeIndexHelper.java" should not be found at all; if found, see jstack-analyse.jpeg.
5 // Check active censhare Server commands. Log into the censhare Admin Client. Go to Status | Commands, open it, sort by the State column and take a screenshot (#analysis data). Sort by the Queue column and take a screenshot (#analysis data). If there is only one command, double-click it to get the description name of the module. Clue: many active commands and queued commands can indicate a performance issue. Admin-Client-Aktive-Kommandos.jpeg
6 // Check censhare diagrams. Log into the censhare Admin Client. Go to Status | Diagrams, open it and take a screenshot (#analysis data). Check for peaks (needs some experience). Clue: peaks come in two types. If the line goes above normal and comes back down after some time, there was a problem, but the censhare Server has recovered. If the line goes above normal and stays there, the problem still exists and may require a server restart. Save the analysis data before a restart if possible; otherwise, further analysis of the root cause is not possible.
7 // Add the #analysis data to the ticket.
8 // Check whether the reported slowness is reproducible on the system. If the reported information is insufficient, ask for an asset ID and the exact steps to confirm the slowness; it may be only sporadically reproducible. …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/performance-analysis-of-censhare-server
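The Full GC thresholds described in the snippet above (pauses under ~10 s, ideally under 3 s, and an old generation that actually shrinks) can be checked mechanically. A minimal sketch, assuming Parallel GC log lines in the classic pre-unified HotSpot format; the sample line, field layout, and threshold parameter are illustrative assumptions, not taken from the censhare documentation:

```python
import re

# Matches classic Parallel GC "Full GC" lines, e.g.:
#   1234.567: [Full GC (Ergonomics) [PSYoungGen: 10240K->0K(20480K)]
#   [ParOldGen: 292282K->180321K(700416K)] ..., 2.3456789 secs]
FULL_GC = re.compile(
    r"\[Full GC .*?\[ParOldGen: (\d+)K->(\d+)K\((\d+)K\)\].*?, ([\d.]+) secs\]"
)

def check_full_gc(line, max_pause_secs=10.0):
    """Return a dict describing one Full GC event, or None if the line is no Full GC."""
    m = FULL_GC.search(line)
    if m is None:
        return None
    before_kb, after_kb = int(m.group(1)), int(m.group(2))
    pause = float(m.group(4))
    return {
        "pause_secs": pause,
        "pause_ok": pause <= max_pause_secs,
        # GC "does its job" if the old generation actually shrank.
        "reclaimed_kb": before_kb - after_kb,
    }

sample = ("1234.567: [Full GC (Ergonomics) [PSYoungGen: 10240K->0K(20480K)] "
          "[ParOldGen: 292282K->180321K(700416K)] 302522K->180321K(720896K), "
          "2.3456789 secs]")
print(check_full_gc(sample))
```

On the sample line this reports a 2.35 s pause (within the 10 s budget) and roughly 110 MB reclaimed in the old generation; a long series of events where `reclaimed_kb` stays near zero would point at the memory-leak check further down this results list.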

Improve performance for remote asset events

… With large numbers of remote asset events and slow network connections, the performance of the master server can be impacted. If processing the remote asset eve …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/improve-performance-for-remote-asset-events

RAM allocation for censhare Server

… Monitor the RAM usage of the allocated memory. Configuration practices Diagrams-JVM-memory.jpeg Considerations when increasing the allocated RAM for the censhar
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/ram-allocation-for-censhare-server
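When reviewing the allocated RAM, the JVM heap flags (-Xms/-Xmx) use size suffixes that are easy to misread. A minimal sketch that normalizes them to bytes for comparison against the machine's physical RAM; the example flag values are illustrative, not censhare defaults:

```python
import re

_UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3}

def jvm_heap_bytes(flag):
    """Parse a JVM heap flag such as '-Xmx8g' or '-Xms512m' into bytes."""
    m = re.fullmatch(r"-Xm[sx](\d+)([kKmMgG]?)", flag)
    if m is None:
        raise ValueError(f"not a JVM heap flag: {flag!r}")
    size, unit = int(m.group(1)), m.group(2).lower()
    return size * _UNITS.get(unit, 1)

print(jvm_heap_bytes("-Xmx8g"))    # 8589934592
print(jvm_heap_bytes("-Xms512m"))  # 536870912
```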

Limits for censhare Server logfiles

… How to change the censhare server logging limits and where to find the logfiles. Changing the logfile values can increase the required disk space. Check that th …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/limits-for-censhare-server-logfiles

Manage censhare services with rccss

… Use the basic tool rccss to start, stop or query the status of censhare services in a censhare environment. Install rccss Get rccss-install.tar.gz Copy the rccs
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/manage-censhare-services-with-rccss

Optimize monitoring on Windows - Problem Reporting

… The standard setup of Windows Problem Reporting prevents monitoring of crashed applications. Learn how to deactivate this option. The applications stay open because …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/optimize-monitoring-on-windows-problem-reporting

Check for memory leaks

… How to troubleshoot server performance problems that are caused by memory leaks. Introduction If the execution time of your server application significantly inc
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/check-for-memory-leaks

Monitor censhare Client UI thread

… Slow UI performance in censhare Client when users work remotely and what to do about it. Problem Users work remotely and complain about slow UI performance or i
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/client-logging/monitor-censhare-client-ui-thread

Monitor censhare with 3rd party tool Nagios

… Monitor censhare server and network with the open-source solution Nagios or any other monitoring framework using the Nagios/Icinga Plugin API such as Sensu, Nae …

Analyse database connections

… Learn about the best practices to analyze database connections. Configuration practices databaseconnections.jpeg Maximum open database connections to configure …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-database-performamce/analyse-database-connections

Vacuum process for PostgreSQL databases

… Database vacuuming is a way to increase the table and database performance of a PostgreSQL database. Learn how to use the vacuum process to clean the database.
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-database-performamce/vacuum-process-for-postgresql-databases

PostgreSQL database performance check

… Learn about the performance of a PostgreSQL database. Configure PostgreSQL for pretty good performance: every PostgreSQL database has a default configuration, but
<< EOF
\pset format wrapped
\pset linestyle unicode
\pset columns 180
SELECT 'select pg_terminate_backend('||pid||') from pg_stat_activity;' kill_query, usename, state, client_addr, query, query_start
FROM pg_stat_activity
WHERE datname='corpus' AND pid <> … < current_timestamp - INTERVAL '30' MINUTE;
EOF
After running the above code, it provides proc_ids with other information in the "kill_query" column. Terminate the idle sessions with the following command, run as the "corpus" user to connect to the "corpus" database:
select pg_terminate_backend(<proc_id>);


https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-database-performamce/postgresql-database-performance-check
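The idle-session query in the snippet above builds one pg_terminate_backend() statement per stale session. The same selection logic can be sketched in Python; the 30-minute cutoff mirrors the INTERVAL '30' MINUTE in the query, while the row data and the `idle` state filter are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def kill_queries(rows, max_idle=timedelta(minutes=30), now=None):
    """Build pg_terminate_backend() statements for sessions idle too long.

    `rows` mimics pg_stat_activity tuples: (pid, usename, state, query_start).
    """
    now = now or datetime.now(timezone.utc)
    return [
        f"select pg_terminate_backend({pid});"
        for pid, _user, state, query_start in rows
        if state == "idle" and now - query_start > max_idle
    ]

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
rows = [
    (101, "corpus", "idle", now - timedelta(hours=2)),    # idle too long -> kill
    (102, "corpus", "active", now - timedelta(hours=2)),  # still active  -> keep
    (103, "corpus", "idle", now - timedelta(minutes=5)),  # recently idle -> keep
]
print(kill_queries(rows, now=now))  # ['select pg_terminate_backend(101);']
```

Generating the statements first, instead of terminating inline, matches the two-step flow in the snippet: review the kill_query column, then run only the statements you actually want.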

Optimize CDB - adjust feature indexes

… Adjust the indexes in the censhare database (CDB). Use config.xml (AssetStore). Skip arbitrary features in the index You can switch off the indexing for certain
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-database-performamce/optimize-cdb-adjust-feature-indexes

Oracle database performance checks (Admin Client)

… The censhare Admin Client offers a built-in performance analysis tools for the connected Oracle database. Context The check is located in censhare-Server/app/mo
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-database-performamce/oracle-database-performance-checks-admin-client