Modern storage systems improve performance and save storage space through built-in data compression, so data content significantly affects storage-system benchmark results. Real datasets are too large to copy to the target test system, and most cannot be shared for privacy reasons; benchmark programs therefore need to generate test datasets artificially. To keep test results accurate, the generated data must reproduce the characteristics of real datasets that affect storage-system performance. The existing method SDGen analyzes the byte-level content distribution of real datasets and generates datasets accordingly, so it yields accurate test results for storage systems with built-in byte-level compression. However, SDGen does not analyze the word-level content distribution of real datasets, so it cannot guarantee accurate results for storage systems with built-in word-level compression. This paper proposes Text Gen, a text dataset generation method based on the Lognormal probability distribution model. The method builds a corpus from the word-segmentation results of a real dataset, analyzes the distribution of words in the corpus, estimates the parameters of a Lognormal model of the word distribution by maximum likelihood, and generates data content from the model with a Monte Carlo method. The time consumed to generate a dataset depends only on the size of the generated dataset, giving linear time complexity O(n). Four datasets were collected to validate the method, which was tested with a typical word-level compression algorithm, ETDC (End-Tagged Dense Code). Experimental results show that Text Gen generates text datasets more efficiently than SDGen, and that when the generated datasets are used in compression tests, their compression speed and compression ratio are closer to those of the real datasets.
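The pipeline the abstract describes (fit a Lognormal to corpus word frequencies by maximum likelihood, then generate content by Monte Carlo sampling) can be sketched as follows. This is an illustrative sketch, not the paper's actual Text Gen implementation; the function names and the step of drawing per-word weights from the fitted model are our own assumptions.

```python
import math
import random
from collections import Counter

def fit_lognormal(frequencies):
    """MLE for a Lognormal: mu and sigma are the mean and std of the
    log-transformed word frequencies."""
    logs = [math.log(f) for f in frequencies]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

def generate_text(corpus_words, target_bytes, seed=42):
    """Monte Carlo generation: emit words with weights drawn from the
    fitted Lognormal until the requested output size is reached."""
    rng = random.Random(seed)
    counts = Counter(corpus_words)
    vocab = list(counts)
    mu, sigma = fit_lognormal(list(counts.values()))
    # Draw a synthetic frequency for every vocabulary word from the
    # fitted model; generation is O(n) in the output size.
    weights = [rng.lognormvariate(mu, sigma) for _ in vocab]
    out, size = [], 0
    while size < target_bytes:
        word = rng.choices(vocab, weights=weights)[0]
        out.append(word)
        size += len(word) + 1  # +1 for the separating space
    return " ".join(out)
```

Because each emitted word is an independent constant-time draw, total generation time grows linearly with the requested dataset size, matching the O(n) complexity claimed above.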
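ETDC, the word-level compressor used for evaluation, assigns shorter byte codewords to more frequent words: every byte of a codeword except the last lies in 0..127, and the end-tagged last byte lies in 128..255, so codeword boundaries are self-delimiting. A minimal sketch of the rank-to-codeword mapping (our own illustration of the published scheme, not code from the paper):

```python
def etdc_codeword(rank):
    """Map a frequency rank (0 = most frequent word) to its ETDC codeword.
    All bytes except the last are in 0..127; the end-tagged last byte is
    in 128..255, so a decoder can find codeword boundaries without any
    length table."""
    out = [128 + rank % 128]  # end-tagged last byte
    rank //= 128
    while rank > 0:
        rank -= 1  # dense coding: each extra byte covers 128x more words
        out.append(rank % 128)
        rank //= 128
    return bytes(reversed(out))
```

The 128 most frequent words get one-byte codes, the next 128^2 get two-byte codes, and so on, which is why a generated dataset must reproduce the real word-frequency distribution for ETDC compression ratios to match.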