Results for topic "system-administrator":

Import Oracle database

… Execution time: 5 minutes + copy dumpfile time + import wait time = 20-60 minutes for a standard censhare database. Prerequisites: SSH access to the database ser …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/setup/database-setup/import-oracle-database

Delete user passwords

… This server action must not be used in production systems. This server action is disabled by default. If it was enabled in your previous installation, it will b …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/user-management/manage-user-accounts/delete-user-passwords

Support for Adobe PSB MIME type

… PSB is used for image files with up to 300000 pixels width or height. About PSB files The standard PSD only allows an image file size of maximum 2 GB on Mac or …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/media-and-content-management/image-management/support-for-adobe-psb-mime-type

Video previews

… censhare Web and censhare Java Client show preview videos in certain formats. The format depends on factors such as operating system, web browser, or the used v …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/media-and-content-management/video-management/video-previews

FFmpeg presets

… Learn about the presets for MP4, H.264, AAC video encoding. The presets below are used for all videos on the censhare website http://www.censhare.com/ and seem
https://documentation.censhare.com/censhare-2021/en/administrator-guide/media-and-content-management/video-management/ffmpeg-presets
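
The presets themselves are listed in the article. As a rough illustration of the shape of an MP4/H.264/AAC encode (generic ffmpeg options and placeholder file names, not the presets documented there), a call could look like this:

    # Illustrative only: a generic H.264/AAC MP4 encode with placeholder file names,
    # not the censhare presets from the article. -crf 23 selects quality-based rate
    # control; -movflags +faststart helps progressive playback in web browsers.
    ffmpeg -i input.mov -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k -movflags +faststart output.mp4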

Configure translation of asset properties with memory

… Step-by-step configuration guide to translate asset properties
https://documentation.censhare.com/censhare-2021/en/administrator-guide/translation-management/set-up-translation-localization/configure-translation-of-asset-properties-with-memory

Set up Translation with Memory

… The application Translation with Memory in censhare Web requires configuration. We take you step-by-step through the required configuration. Context Translation …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/translation-management/set-up-translation-localization/set-up-translation-with-memory

XLIFF 1.2 support

… XLIFF is an XML standard for the data exchange between translat …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/translation-management/share-translations/xliff-1-2-support

Operation

… Information on monitoring and controlling censhare server and services. Backup and restore the databases and increase performance. Information on managing the n …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation

Performance analysis of censhare Server

… How to perform a censhare Server basic performance analysis (ca. 15-60 min.) for all logged-in users, up to the case that no new client login is possible.

Important hints: In case of performance problems, always perform these checks as a first step. Execute them on the application server where the performance issue appears; a censhare Server performance problem can also lead to a censhare Client login problem. Always save and attach analysis data (marked as #analysis data within the text) before a server restart, otherwise further analysis of the root cause is not possible.

Basic checks: Perform the following basic checks before you move on with a deeper analysis.

0 // Check for OutOfMemory. Log in via SSH and execute the following Unix commands to see if the server got an OOM error: ls -ltr /opt/corpus/*.hprof and grep OutOfMemory ~/work/logs/server-0.*.log. If a heap dump file with the current timestamp exists and its file size no longer increases, the JVM process has hit an OOM error and the heap dump (written because of -XX:+HeapDumpOnOutOfMemoryError) is complete. Restart the censhare Application Server (not the hardware) to solve the incident. For further analysis, check whether the server has enough JVM heap (-Xms/-Xmx) allocated; if it does, transfer the server-0.*.log files and the heap dump (*.hprof) file to request a heap dump analysis.

1 // Check the system load. Log in via SSH. Execute the Unix top command, watch it for a few seconds and take a few screenshots (#analysis data). Execute ./jvmtop.sh (for more information, see https://ecosphere.censhare.com/en/help-and-customer-service/article/2589558) or top -H, watch it for a few seconds and take a few screenshots (#analysis data). top: check whether the censhare Server (java) process has a permanently high load (100%); note that on some systems 100% means all cores and on others 100% means one core, so in the latter case 250% on a 4-core system would still be OK. jvmtop: check whether there are censhare Server threads with a high load. Also check whether the whole system as such has a high load (values like 1-2 are normal, depending on the number of CPUs): a load of 1 means one core is fully used, and a higher value means processes are waiting for execution if only one core is available, so a load of 3 on a 4-core system would still be OK. Clue: If there are multiple Java processes, use the Unix jps command to identify the PID of the censhare Server java process.

2 // Check the censhare garbage collection log files. Download the log files with the Unix command cssgetlogs (2459174); cssgetlogs is a censhare-internal Unix script, partners and customers can use SCP to download the log files (#analysis data). The following grep examples apply when using "throughput GC" (Parallel GC/Parallel Old GC), which performs a stop-the-world collection when memory is full; they do not apply to a concurrent collector such as Garbage First (G1). G1 GC logs can be visualized and analyzed with a GC log analysis tool, for example http://gceasy.io/ (see also https://blog.gceasy.io/2016/07/07/understanding-g1-gc-log-format/). Log in via SSH, execute the Unix lgl command (go to the end of the log file) and watch it for a few seconds. Check for high "Full GC" times and/or frequent intervals (a Full GC every hour with a duration of max. 10 seconds would be good, but it depends on the system). A Full GC means a stop of the censhare process, so these stops should be short and rare; stops of less than 3 seconds and only every 3 minutes or even longer are perfect. Check that the garbage collection actually does its job: ParOldGen: 292282K-> …

… (< 5.x): "RangeIndexHelper.java" should not be found at all; if it is found, see here: jstack-analyse.jpeg

5 // Check active censhare Server commands. Log into the censhare Admin Client. Go to Status | Commands, open it, sort by the column State and take a screenshot (#analysis data); then sort by the column Queue and take a screenshot (#analysis data). Check whether there is only one command; double-click it to get the description name of the module. Clue: More active commands and commands in the queue can be an indicator of a performance issue. Admin-Client-Aktive-Kommandos.jpeg

6 // Check censhare diagrams. Log into the censhare Admin Client. Go to Status | Diagrams, open it and take a screenshot (#analysis data). Check whether there are peaks (needs some experience). Clue: Peaks can be of two types. If the line goes up above normal and comes down after some time, there was a problem, but the censhare Server has recovered. If the line goes up above normal and stays there, the problem still exists and may require a server restart; save the analysis data before a restart if possible, otherwise further analysis of the root cause is not possible.

7 // Add the (#analysis data) to the ticket.

8 // Check if the reported slowness is reproducible on the system. If the reported information is insufficient, ask for an asset ID and the exact steps to confirm the slowness. Maybe it is only sporadically reproducible. …
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/performance-analysis-of-censhare-server
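
As a compact recap of checks 0 and 2 from the excerpt above, the commands could be run in sequence on the application server. The first two lines are taken from the excerpt; the GC log path in the last line is an assumption, since the excerpt only refers to the censhare-internal lgl alias:

    # Check 0: look for a fresh heap dump and OutOfMemory entries in the server log (commands from the excerpt).
    ls -ltr /opt/corpus/*.hprof
    grep OutOfMemory ~/work/logs/server-0.*.log
    # Check 2 (throughput GC only): show the most recent Full GC pauses.
    # The GC log path below is assumed; adjust it to where your GC log is actually written.
    grep "Full GC" ~/work/logs/gc*.log | tail -20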

RAM allocation for censhare Server

… Monitor the RAM usage of the allocated memory. Configuration practices Diagrams-JVM-memory.jpeg Considerations when increasing the allocated RAM for the censhar
https://documentation.censhare.com/censhare-2021/en/administrator-guide/operation/monitor-server-and-services/ram-allocation-for-censhare-server
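
The article works with the Diagrams view of the Admin Client; as a shell-side sketch, under the assumption that the censhare Server runs as a regular HotSpot JVM, the configured and used heap could also be inspected like this (<PID> is a placeholder):

    # Identify the censhare Server java process, as also suggested in the performance-analysis excerpt.
    jps -l
    # Print the effective heap flags; InitialHeapSize/MaxHeapSize correspond to -Xms/-Xmx.
    jcmd <PID> VM.flags
    # Sample heap and GC statistics every 5 seconds, 6 times.
    jstat -gc <PID> 5000 6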

Send server logs to Syslog server

… How to configure the server.xml to send log messages to a Syslog server. The configuration file is stored in cscs/app/config/server.$CSS_ID.xml. Syslog sample f