=== modified file 'src/docbkx/en/dhis2_user_man_data_administration.xml' --- src/docbkx/en/dhis2_user_man_data_administration.xml 2013-06-01 12:33:28 +0000 +++ src/docbkx/en/dhis2_user_man_data_administration.xml 2014-02-24 12:58:48 +0000 @@ -164,31 +164,21 @@ An error exists in the left-side validation rule definition. Go to Services->Data quality->Validation rule and click the "Edit" icon on the offending rule. Press "Edit right side" and make the corrections that are required. -
- Data Archive - The purpose of the data archive function is to move data which is currently not being used for analysis to a secondary storage location in order to improve performance of the application. Data can be both archived and unarchived. When archiving data one moves it from the primary storage to the secondary storage location, while unarchiving moves it from the secondary storage location to the primary. Analysis functionality in DHIS 2 heavily utilizes queries to the data value database table, and by reducing the size of this table these operations will be significantly faster. Typically one would want to archive data that is older than two years. - To archive data, first enter a start date and an end date for the time span of the data which should be archived. Then press the archive button. The operation might take a few minutes. - To unarchive data, first enter a start date and an end date for the time span of the data which should be unarchived. Then press the unarchive button. The operation might take a few minutes. - In some cases you might end up with overlapping data. For instance one might archive data for a given timespan, then later enter data for a period in that timespan. In such cases the system will automatically overwrite the oldest of the overlapping values with the newest during the archive or unarchive operation. -
-
- Beneficiary Data Archive - The purpose of the beneficiary data archive function is to move beneficiary data value which is currently not being used for analysis to a secondary storage location in order to improve performance of the application. Data can be both archived and unarchived. When archiving data one moves it from the primary storage to the secondary storage location, while unarchiving moves it from the secondary storage location to the primary. Analysis functionality in DHIS 2 heavily utilizes queries to the data value database table, and by reducing the size of this table these operations will be significantly faster. Typically one would want to archive beneficiary data that is older than two years. - To archive beneficiary data, first enter a start date and an end date for the time span of the data which should be archived. Then press the archive button. The operation might take a few minutes. - To unarchive beneficiary data, first enter a start date and an end date for the time span of the data which should be unarchived. Then press the unarchive button. The operation might take a few minutes. - In some cases you might end up with overlapping data. For instance one might archive beneficiary data for a given timespan, then later enter data for a period in that timespan. In such cases the system will automatically overwrite the oldest of the overlapping values with the newest during the archive or unarchive operation. -
Maintenance The data maintenance module has several options, each described below. - Clear data mart (aggregated datavalues) - The data mart is where DHIS 2 stores aggregated data produced during the export to data mart process. This function clears the database table which contains aggregated data element values. - - - Clear data mart (aggregated indicatorvalues) - The data mart is where DHIS 2 stores aggregated data produced during the export to data mart process. This function clears the database table which contains aggregated indicator values. + Clear analytics tables + Completely deletes the analytics tables, which are used to generate aggregated data for the pivot tables, GIS and reports. + + + Clear data mart (aggregated indicator and data element values) + The data mart is where DHIS 2 stores aggregated data produced during the export to data mart process. This function clears the database table which contains aggregated indicator and data element values. + + + Rebuild data mart index + Rebuilds the database indices on the aggregated data generated during a data mart process. Clear zero values @@ -202,15 +192,18 @@ Prune periods This function removes all periods which have no registered data values. Reducing the number of periods will improve system performance. + + Update category option combinations + Rebuilds the category option combinations. This may be required after altering the category options which belong to a given category. +
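Several of these maintenance functions can also be triggered outside the user interface. The following is a minimal sketch only, not an official recipe: it assumes a DHIS 2 version that exposes the /api/maintenance Web API endpoint and the parameter names shown, and it uses a placeholder server address and credentials. Consult the Web API documentation for your release before relying on it.

    # Minimal sketch: invoking some data maintenance functions over the Web API.
    # ASSUMPTIONS: the /api/maintenance endpoint and these parameter names exist
    # in your DHIS 2 version; the server URL and credentials are placeholders.
    import requests

    BASE_URL = "https://dhis2.example.org"   # placeholder server address
    AUTH = ("admin", "district")             # placeholder credentials

    params = {
        "zeroDataValueRemoval": "true",       # "Clear zero values"
        "periodPruning": "true",              # "Prune periods"
        "categoryOptionComboUpdate": "true",  # "Update category option combinations"
    }

    response = requests.post(BASE_URL + "/api/maintenance", params=params, auth=AUTH)
    response.raise_for_status()
    print("Maintenance tasks submitted:", response.status_code)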
Resource tables Resource tables are supporting tables that are used during analysis of data. One would typically join the contents of these tables with the data value table when doing queries from - third-party applications like Microsoft Excel. Simply select the tables that should be - regenerated and press "Generate tables". Regeneration of the resource tables should - only be done once all data integrity issues are resolved. + third-party applications like Microsoft Excel. They are also used extensively by the analysis modules of DHIS 2. Regeneration of the resource tables should + only be done once all data integrity issues are resolved. The resource tables are also generated automatically every time the analytics process is run by the system. Organisation unit structure (_orgunitstructure) @@ -246,11 +239,15 @@ types to have a defined behavior. - Period structure (_periodstructure) + Period structure (_dataperiodstructure) This table provides information about all periods and which period type they are associated with. For each period type with lower frequency than itself, it contains information about which period it will fall within. + + Data element category option combinations (_dataelementcategoryoptioncombo) + This table provides a mapping between data elements and all possible category option combinations. +
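As an illustration of the kind of query a third-party tool might run against these tables, the sketch below joins the data value table with _orgunitstructure to count values per level-2 organisation unit. It is a sketch only, assuming a PostgreSQL database named dhis2 and the datavalue and _orgunitstructure column names shown (sourceid, organisationunitid, idlevel2); the actual schema can differ between DHIS 2 versions.

    # Minimal sketch: counting data values per level-2 organisation unit by
    # joining the data value table with the _orgunitstructure resource table.
    # ASSUMPTIONS: PostgreSQL database "dhis2" and the column names below;
    # adjust the connection details and schema to your own installation.
    import psycopg2

    conn = psycopg2.connect(dbname="dhis2", user="dhis", password="secret", host="localhost")

    query = """
        SELECT ous.idlevel2 AS level2_orgunit_id, COUNT(*) AS value_count
        FROM datavalue dv
        JOIN _orgunitstructure ous ON ous.organisationunitid = dv.sourceid
        GROUP BY ous.idlevel2;
    """

    with conn, conn.cursor() as cur:
        cur.execute(query)
        for level2_id, value_count in cur.fetchall():
            print(level2_id, value_count)

    conn.close()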
@@ -263,7 +260,7 @@ The SQL views are dropped in reverse alphabetical order based on their names in DHIS 2, and created in regular alphabetical order. This allows you to have dependencies between SQL views, given that views only depend on other views which come earlier in the alphabetical - order. For instance, "ViewB" can safely depend on "ViewA". Otherwise, having views depending + order. For instance, "ViewB" can safely depend on "ViewA". Otherwise, having views depending on other views results in an integrity violation error.
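The ordering rule can be illustrated with a small sketch. The two view bodies below are generic examples rather than actual DHIS 2 views; only the sorting mirrors the create and drop behaviour described above.

    # Minimal sketch of the alphabetical create/drop order for SQL views.
    # The view definitions are illustrative placeholders only.
    views = {
        "ViewA": "SELECT periodid, COUNT(*) AS value_count FROM datavalue GROUP BY periodid",
        # "ViewB" may depend on "ViewA" because "ViewA" comes earlier alphabetically.
        "ViewB": 'SELECT * FROM "ViewA" WHERE value_count > 100',
    }

    create_order = sorted(views)               # ["ViewA", "ViewB"]: created first to last
    drop_order = sorted(views, reverse=True)   # ["ViewB", "ViewA"]: dropped first to last

    print("Created in order:", create_order)
    print("Dropped in order:", drop_order)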
@@ -319,20 +316,12 @@ - + In the example above, a data lock exception would be created for "ab Abundant Life Organization" and "ab Seventh Day Hospital" for the "Care and Support" dataset for "February 2012".
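The same kind of lock exception can, in principle, be created programmatically. The sketch below is an assumption-laden example: it presumes that your DHIS 2 version exposes the /api/lockExceptions Web API resource with ou, ds and pe parameters, and the identifiers shown are placeholders rather than the actual UIDs of the organisation units and data set named above.

    # Minimal sketch: creating a data lock exception over the Web API.
    # ASSUMPTIONS: the /api/lockExceptions endpoint and its ou/ds/pe parameters
    # exist in your DHIS 2 version; all UIDs and credentials are placeholders.
    import requests

    BASE_URL = "https://dhis2.example.org"   # placeholder server address
    AUTH = ("admin", "district")             # placeholder credentials

    params = {
        "ou": "OrgUnitUid01",   # placeholder organisation unit UID
        "ds": "DataSetUid01",   # placeholder data set UID
        "pe": "201202",         # February 2012 in ISO period notation
    }

    response = requests.post(BASE_URL + "/api/lockExceptions", params=params, auth=AUTH)
    print(response.status_code)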
-
- Zero value storage - The zero value storage function makes it possible to define for which data elements the system should store zero values. In most cases zeros are significant only for a subset of the data elements in the database. Any zeroes entered during data entry will be ignored by default, except for data elements where the zero value storage has been enabled. -
-
- Organisation unit pruning - If you need to prune out branches of the organisational unit hierarchy, you can use the organisational unit pruning function. Keep in mind that the only selected organisational (and its children) will be kept. All other orgunits (and any data associated with them) will be deleted from the database. -
Min-Max Value Generation This administrative function can be used to generate min-max values, which are used as part of the data quality and validation process for specific organisation units and data sets. Simply select the data set from the left-hand frame, and then select the required organisation units to generate the min-max values for from the organisation unit selector on the right. Press the "Generate" button to generate or regenerate all min-max values. Press "Remove" to remove all min-max values which are currently stored in the database. One common way of deriving such bounds is sketched below. @@ -370,7 +359,7 @@ This option is intended for system administrators only. The cache statistics show the status of the application level cache. The application level cache refers to the objects and query results that the application is caching in order to speed up performance. If the database has been modified directly, the application cache needs to be cleared for the changes to take effect.
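For the min-max generation function described above, one common way of deriving such bounds is to take the mean of the historical values for a data element and organisation unit, plus or minus a multiple of the standard deviation. The sketch below illustrates that idea only; it is not the exact algorithm used by the DHIS 2 generator, which may differ between versions.

    # Minimal sketch of deriving min-max bounds as mean +/- factor * standard
    # deviation. ASSUMPTION: this illustrates the general idea of min-max
    # bounds; the exact DHIS 2 algorithm is not reproduced here.
    import statistics

    def min_max_bounds(values, factor=2.0):
        """Return (minimum, maximum) bounds for a series of historical values."""
        mean = statistics.mean(values)
        std_dev = statistics.pstdev(values)
        low = max(0, int(mean - factor * std_dev))
        high = int(round(mean + factor * std_dev))
        return low, high

    # Example: monthly values reported by one facility for one data element.
    history = [110, 95, 120, 130, 105, 98, 115, 125]
    print(min_max_bounds(history))   # values outside these bounds get flagged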
- Dynamic attributes + Attributes Dynamic attributes can be used to add additional information to certain objects (namely data elements, indicators, organisation units and users). In addition to the standard attributes each of these objects has, it may be required in certain installations to have @@ -390,11 +379,12 @@
Scheduling - Data mart jobs can be automatically scheduled to run on regular intervals. Simply select the aggregation period types, organisation unit group set aggregation level, and strategy to configure how the scheduled job should run. Pressing "Start" will enable the scheduled job to run at a pre-determined time or can be run immediately by pressing "Execute now" + The analytics, resource tables and data mart can be automatically scheduled to run at regular intervals. Simply select the aggregation period types, organisation unit group set aggregation level, and strategy to configure how the scheduled job should run. If you are using surveillance rules, you can choose to run them daily by selecting the "All daily" option. + Pressing "Start" will enable the scheduled job to run at a pre-determined time (always at midnight based on the server time). - + === modified file 'src/docbkx/en/resources/images/maintainence/scheduling.png' Binary files src/docbkx/en/resources/images/maintainence/scheduling.png 2012-02-18 07:50:49 +0000 and src/docbkx/en/resources/images/maintainence/scheduling.png 2014-02-24 12:58:48 +0000 differ
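In addition to the scheduled midnight run described in the Scheduling section above, the analytics process can be started on demand. The sketch below assumes the /api/resourceTables/analytics Web API endpoint is available in your DHIS 2 version, with a placeholder server address and credentials.

    # Minimal sketch: triggering the analytics run manually over the Web API
    # instead of waiting for the scheduled midnight job.
    # ASSUMPTIONS: the /api/resourceTables/analytics endpoint is available in
    # your DHIS 2 version; server address and credentials are placeholders.
    import requests

    BASE_URL = "https://dhis2.example.org"   # placeholder server address
    AUTH = ("admin", "district")             # placeholder credentials

    # Regenerates the resource tables and rebuilds the analytics tables, the
    # same work the scheduler performs at midnight.
    response = requests.post(BASE_URL + "/api/resourceTables/analytics", auth=AUTH)
    print(response.status_code, response.text)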