Steganalysis of Audio Based on Audio Quality Metrics
Hamza Özer a,d, İsmail Avcıbaş b, Bülent Sankur a, Nasir Memon c
a Department of Electrical and Electronics Engineering, Boğaziçi University, Bebek, İstanbul, Turkey
b Department of Electronics Engineering, Uludağ University, Görükle, Bursa, Turkey
c Department of Computer and Information Science, Polytechnic University, Brooklyn, NY, USA
d National Research Institute of Electronics and Cryptology, Marmara Research Center, Gebze, Turkey
ABSTRACT
Classification of audio documents as bearing hidden information or not is a security issue addressed in the context of steganalysis. A cover audio object can be converted into a stego-audio object via steganographic methods. In this study we present a statistical method to detect the presence of hidden messages in audio signals. The basic idea is that the distributions of various statistical distance measures, calculated on cover audio signals and on stego-audio signals vis-à-vis their denoised versions, are statistically different. The design of the audio steganalyzer relies on the choice of these audio quality measures and on the construction of a two-class classifier. Experimental results show that the proposed technique can be used to detect the presence of hidden messages in digital audio data.
Keywords: Steganalysis, watermarking, audio quality measures, feature selection, support vector machine
1. INTRODUCTION
Given the proliferation of digital multimedia data and the redundancy inherent, despite compression, in digital documents, there has been growing interest in using multimedia for the purpose of steganography. Steganography exploits the covert channel that multimedia documents make possible and aims to achieve message communication whose effect and presence go totally undetected except by the intended recipient. In other words, steganographic techniques strive to hide the very occurrence of a message communication [20].
To achieve secure and undetectable communication, stego-objects, documents containing a secret message, should be indistinguishable from cover-objects, documents not containing any secret message. In this respect, steganalysis is the set of techniques that aim to distinguish between cover-objects and stego-objects. Steganalysis can be implemented in either a passive warden or an active warden style. A passive warden simply examines the message and tries to determine whether it potentially contains a hidden message; if it appears that it does, the document is stopped, otherwise it is let through. An active warden, on the other hand, can alter messages deliberately, even though it may not see any trace of a hidden message, in order to foil any secret communication that may nevertheless be occurring. The amount of change introduced by the warden should not exceed the point where the subjective quality of the suspected stego audio track is altered significantly. In this paper we are mainly concerned with passive warden steganalysis. Specifically, we intend to develop steganalysis tools for audio documents. It should be noted that although there has been considerable effort in the steganalysis of digital images [1, 8, 11, 27], steganalysis of digital audio remains relatively unexplored.
The idea underlying our audio steganalysis is the fact that any steganographic technique will invariably perturb the statistics of the cover signal to some extent. In fact, for the additive class of message embedding techniques, the presence of steganographic communication in a signal can be modeled as additive noise in the time or frequency domain. Even for non-additive techniques, such as substitutive embedding, the difference between the stego-signal y and the cover signal x can be considered to be additively combined with the cover signal, y = x + (y − x), albeit in a signal-dependent manner. The presence of the steganographic artifact can then be put into evidence by recovering the original cover signal or, alternatively, by denoising the suspected stego-signal. The steganalyzer can directly apply a statistical test on the denoising residual, r = y − x̂, where x̂ is the estimated original signal. This residual should correspond to the artifact due to the embedding of a hidden message. Notice that even if the test signal does not contain any hidden message, the denoising step will still yield an output, whose statistics can be expected, however, to be different from those of a true embedding.
Alternatively, the steganalyzer can be constructed as a distortion meter between the test signal and the estimated original signal x̂, again obtained by denoising. For this purpose, one can use various audio signal quality measures to monitor the extent of steganographic distortion. Here we implicitly assume that the distance between a smooth signal and its denoised version is less than the distance between a noisy signal and its denoised version; that is, any embedding effort will render the signal less predictable and less smooth. The perturbations due to the presence of embedding translate to the feature space, where the audio quality features of marked and non-marked signals plot in different parts of the space. An alternative way to sense the presence of marking would be to monitor the change in the predictability of a signal, temporally and/or across scales.
In the proposed steganalysis method, presented in Fig. 1, the underlying idea is to first isolate the stego-signal by subtracting the estimated original signal from the given test signal. The original signal itself is estimated by some denoising scheme or by a model-based algorithm. In the case of additive message embedding, denoising algorithms become effective tools for estimating the original, non-marked signal [24]. We have used the wavelet shrinkage method [6] to reduce the noise, in the expectation of removing the hidden message. In this scheme, a soft-thresholding nonlinearity is applied to the wavelet coefficients such that the inverse wavelet transform yields the denoised signal. An alternative scheme would be to use sparse code shrinkage based on independent component analysis [9]. For non-additive message embedding, the original signal can be estimated via maximum likelihood (ML) or maximum a posteriori (MAP) approaches.
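The denoising-and-residual step above can be sketched in a few lines. This is only a minimal illustration using a one-level Haar transform and a hypothetical threshold value, not the full multilevel wavelet shrinkage of [6]:

```python
# Minimal sketch: one-level Haar transform, soft thresholding of the detail
# coefficients, inverse transform, and the denoising residual r = y - x_hat.
# The threshold t and the sample values are hypothetical.

def haar_dwt(x):
    """One-level Haar transform: returns (approximation, detail) coefficients."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform."""
    x = []
    for a, d in zip(approx, detail):
        x.extend([a + d, a - d])
    return x

def soft_threshold(coeffs, t):
    """Soft-thresholding nonlinearity: shrink each coefficient toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def denoise(y, t=0.05):
    """Denoise by shrinking the detail coefficients and inverting the transform."""
    approx, detail = haar_dwt(y)
    return haar_idwt(approx, soft_threshold(detail, t))

# The denoising residual carries the suspected embedding artifact.
y = [0.1, 0.12, 0.3, 0.28, -0.2, -0.22, 0.0, 0.02]
x_hat = denoise(y, t=0.05)
residual = [yi - xi for yi, xi in zip(y, x_hat)]
```

In the paper's pipeline the residual (or the distance between y and x̂) is what feeds the quality measures of Section 2.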
Figure 1. Block diagram of the steganalysis method.
The rest of this paper is organized as follows. In Section 2, we discuss audio quality measures. In Section 3 we address the problem of feature selection for the two-class problem; Section 3 also discusses the classifiers used on a comparative basis. The test database, the experiments conducted, and the main results of classification and steganalysis are given in Section 4. Finally, conclusions are drawn in Section 5.
2. SELECTION OF AUDIO FEATURES FOR STEGANALYSIS
In this section, we investigate several audio quality measures for the purpose of audio steganalysis. We consider an audio quality metric to be, in fact, a functional that converts its input signal into a measure that purportedly is sensitive to the presence of a steganographic message embedding. We search for measures that reflect the quality of a distorted or degraded audio signal vis-à-vis its original in an accurate, consistent, and monotonic way. Such a measure, in the context of steganalysis, should respond to the presence of a hidden message with minimum error, should work for a large variety of embedding methods, and should react in proportion to the embedding strength.
We consider two categories of measures that quantify signal distortion, namely perceptual and non-perceptual. In the perceptual category, the distortion measures are specific to speech/audio and take into consideration the properties of the human auditory system. These audio quality measures share a two-component structure. The first component is a perceptual transformation module, which transforms the input into a perceptually relevant domain such as the temporal, spectral, or loudness domain; the domain filters typically take psychoacoustic models into account. The second component is the cognition/judgment module, which compares the two perceptually transformed signals in order to generate an estimated distortion. In the non-perceptual category, the measures penalize the distance between a test signal and its reference signal. The basis of comparison varies from Euclidean distance to artificial neural networks or fuzzy logic. Although most of them were developed for speech quality, they can easily be extended to the audio band.
The audio quality measures tested for the design of the steganalyzer are listed in Table 1. Their detailed descriptions can be found in Appendix A.
Table 1: Audio quality measures tested for the design of the steganalyzer
Perceptual-domain measures:
  Bark Spectral Distortion (BSD)
  Modified Bark Spectral Distortion (MBSD)
  Enhanced Modified Bark Spectral Distortion (EMBSD)
  Perceptual Speech Quality Measure (PSQM)
  Perceptual Audio Quality Measure (PAQM)
  Measuring Normalizing Block 1 (MNB1)
  Measuring Normalizing Block 2 (MNB2)
  Weighted Slope Spectral distance (WSS)
Non-perceptual domain measures, time-domain:
  Signal-to-noise ratio (SNR)
  Segmental signal-to-noise ratio (SNRseg)
  Czenakowski distance (CZD)
Non-perceptual domain measures, frequency-domain:
  Log-Likelihood Ratio (LLR)
  Log-Area Ratio (LAR)
  Itakura-Saito distance (ISD)
  COSH distance (COSH)
  Cepstral distance (CD)
  Short-Time Fourier-Radon Transform distance (STFRT)
  Spectral Phase Distortion (SP)
  Spectral Phase-Magnitude Distortion (SPM)
3. FEATURE SELECTION AND CLASSIFIER DESIGN
It has been observed that filtering an audio signal that carries no watermark message changes the quality metrics differently than filtering an audio signal with an embedded message. For feature selection we used two approaches, namely analysis of variance (ANOVA) [19] and the sequential floating search method [17].
Analysis of Variance (ANOVA)
The analysis of variance, known as ANOVA, is a general technique for statistical hypothesis testing, often used when an experiment contains a number of groups or conditions and one wants to see whether there are any statistically significant differences between them. In its most general form, the hypotheses are:
H_0: \mu_1 = \mu_2 = \dots = \mu_k
H_1: \mu_i \neq \mu_j \quad \text{for at least one pair } (i, j)
For N pieces of data, an F test with k − 1 and N − k degrees of freedom is applied; in our case k equals 2. A high F value indicates that at least one pair of means is not equal. The confidence level determines the threshold for F; in this study we choose it as 95 percent. We performed the ANOVA test for each steganographic method separately; the results are given in Table 2.
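The F statistic behind this screening test can be sketched directly from its definition. The group values below are hypothetical quality-measure scores, not data from the paper:

```python
# One-way ANOVA F statistic with k-1 and N-k degrees of freedom, for the
# two-group (k = 2) cover-vs-stego screening described above.

def anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    # Between-group (explained) sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group (residual) sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

cover = [0.10, 0.12, 0.11, 0.13]   # a measure on cover signals (hypothetical)
stego = [0.20, 0.22, 0.19, 0.21]   # the same measure on stego signals (hypothetical)
f_value = anova_f([cover, stego])  # a large F suggests the means differ
```

A feature is retained when its F value exceeds the threshold implied by the chosen 95 percent confidence level.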
Sequential Floating Search Method (SFS)
While ANOVA tests each feature separately, the SFS algorithm takes their intercorrelation into consideration and tests features in ensembles [17]. The algorithm can be described as follows:
1. Choose the best two features from the K features, i.e., the pair yielding the best classification result;
2. Add the most significant feature from the remaining feature set, where the selection is made on the basis of the feature that contributes most to the classification result when all selected features are considered together;
3. Determine the least significant feature in the selected set by conditionally removing features one by one; if the removal of the least significant feature improves the classification result, remove it and repeat step 3, otherwise keep it and go to step 2;
4. Stop when the number of selected features equals the number of features required.
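The floating search above can be sketched schematically. Here `score` stands in for the classification accuracy on a candidate subset; the toy criterion and feature names below are hypothetical:

```python
# Schematic sequential floating forward search: grow the subset by the most
# significant feature, then "float" back by conditionally removing the least
# significant one while that improves the criterion.

def sffs(features, score, target_size):
    """Return a feature subset of the requested size selected by floating search."""
    selected = []
    while len(selected) < target_size:
        # Add the most significant remaining feature (step 2).
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
        # Conditionally remove the least significant feature (step 3),
        # never shrinking below the initial pair.
        while len(selected) > 2:
            worst = max(selected,
                        key=lambda f: score([g for g in selected if g != f]))
            reduced = [g for g in selected if g != worst]
            if score(reduced) > score(selected):
                selected = reduced
            else:
                break
    return selected

def toy_score(subset):
    # Hypothetical criterion: reward the "useful" features a and c and
    # penalize subset size, mimicking a validation-accuracy estimate.
    return len(set(subset) & {"a", "c"}) - 0.1 * len(subset)

chosen = sffs(["a", "b", "c", "d"], toy_score, target_size=2)
```

Unlike ANOVA's per-feature F test, the criterion here is always evaluated on the whole candidate ensemble, which is what lets SFS capture feature intercorrelation.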
We have applied the SFS method to all data hiding methods first under the limitation of an equal number of features as in ANOVA, and then without any such constraint. The results are depicted in Table 3.
Table 2: The discriminatory features determined by ANOVA per embedding method
[Table body not recoverable in this copy: rows are the six embedding methods (DSSS, FHSS, ECHO, DCTwHAS, STEGA, STOOLS) and columns are the nineteen quality measures of Table 1; a check mark indicates that ANOVA found the measure discriminatory for that method.]
Table 3: The discriminatory features selected, per embedding method, by the SFS method, (a) linear regression used for classification, (b) SVM is used for classification.
a. [Table body not recoverable in this copy: check marks indicate, per embedding method, the features selected by SFS with the linear regression classifier.]
b. [Table body not recoverable in this copy: check marks indicate, per embedding method, the features selected by SFS with the SVM classifier.]
As can be expected, there is substantial overlap between these tables, especially in the case of DSSS and FHSS watermarking, and yet there are crucial differences. The spread-spectrum technique of Cox et al. (DCTwHAS), which was the most difficult to detect, necessitates more features than all the other algorithms. In general, passive warden techniques make use of fewer features than active warden techniques. One can notice that the PAQM and LLR features are in demand by a larger number of techniques. Another interesting observation is that the best individual features, as determined by ANOVA, can be quite different from the feature set obtained when their correlation is taken into account, as determined by the SFS method. For example, the EBSD feature is in the spotlight for ANOVA, while it is completely eliminated by the SFS selection technique.
In order to classify the signals as marked or not marked based on the selected audio quality features, we tested and compared two types of classifiers, namely, linear regression analysis and support vector machines.
Regression Analysis Classifier
In the design of the regression classifier, we regressed the distance measure scores to −1 and 1, depending on whether the audio did not or did contain a hidden message. In the regression model [19], we expressed each decision label g_i as a linear combination of the quality measures, g_i = β^T F_i, where F_i is the vector of the q selected features and β is the vector of regression coefficients. The regression coefficients are estimated in the training phase and then used in the testing phase. In the test phase, the incoming audio signal is denoised and the selected quality metrics are calculated; the distance measure is then obtained using the estimated regression coefficients. If the output exceeds the threshold of 0, the decision is that the audio contains a message; otherwise, the decision is that it does not.
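The regression classifier just described can be sketched for two selected features. The least-squares solver, the absence of an intercept term, and all feature values below are illustrative assumptions, not the paper's exact setup:

```python
# Minimal sketch: fit labels in {-1, +1} to two quality measures by ordinary
# least squares (normal equations), then threshold the regression output at 0.

def fit_beta(X, g):
    """Solve the 2-feature normal equations for the regression coefficients."""
    s11 = sum(f[0] * f[0] for f in X)
    s12 = sum(f[0] * f[1] for f in X)
    s22 = sum(f[1] * f[1] for f in X)
    t1 = sum(f[0] * gi for f, gi in zip(X, g))
    t2 = sum(f[1] * gi for f, gi in zip(X, g))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

def classify(beta, f):
    """Decide +1 (message present) if the regression output exceeds 0, else -1."""
    score = beta[0] * f[0] + beta[1] * f[1]
    return 1 if score > 0 else -1

# Training data: two stego (label +1) and two cover (label -1) feature vectors.
X_train = [(1.0, 1.0), (2.0, 1.0), (-1.0, -1.0), (-2.0, -1.0)]
g_train = [1, 1, -1, -1]
beta = fit_beta(X_train, g_train)
```

With more than two features the same normal equations are solved in matrix form; the thresholding step is unchanged.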
Support Vector Machine Classifier
The support vector method is based on efficient multidimensional function optimization [23], and it minimizes the empirical risk, that is, the training set error. For the training feature data {F_i, g_i}, i = 1, ..., N, with g_i ∈ {−1, 1}, a separating hyperplane is given by w · F + b = 0, where w is the normal to the hyperplane. The set of vectors is said to be optimally separated if it is separated without error and the distance between the closest vectors and the hyperplane is maximal. A separating hyperplane in canonical form must satisfy the following constraints for the ith feature vector and label:
g_i (w \cdot F_i + b) \geq 1, \quad i = 1, \dots, N.
The distance d(w, b; F) of a feature vector F from the hyperplane (w, b) is
d(w, b; F) = \frac{|w \cdot F + b|}{\lVert w \rVert}.
The optimal hyperplane is obtained by maximizing this margin. In our study we use a polynomial kernel function to separate the data. The tests and results are given in the next section.
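In the standard SVM formulation (not spelled out in the text), maximizing this margin is equivalent to a quadratic program:

```latex
% Maximizing the margin 2/||w|| under the canonical constraints is
% equivalent to minimizing ||w||^2/2 subject to those constraints:
\min_{w,\, b} \; \tfrac{1}{2} \lVert w \rVert^2
\quad \text{subject to} \quad
g_i \left( w \cdot F_i + b \right) \ge 1, \qquad i = 1, \dots, N .
```

In the dual form of this program the feature vectors enter only through inner products, which is where the polynomial kernel used in our study is substituted.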
4. EXPERIMENTAL RESULTS
In our experiments we tested six different data hiding algorithms overall: four watermarking and two LSB steganographic techniques. The watermarking techniques are direct-sequence spread spectrum (DSSS) [5], frequency-hopping spread spectrum (FHSS) [7], a frequency-masking technique with DCT (DCTwHAS) [7], and echo watermarking [5]. The steganographic methods are Steganos [21] and S-Tools [22]. These tools were selected on the basis of being the most popular methods with readily available software. We use an audio data set of 100 records, where each one contains a three- or four-second sentence.
For testing and training, we used 100 audio records; through embedding we obtained 200 records in total, of which 100 are embedded and 100 are kept as original records. A mixture containing 50 embedded and 50 original records is used for training, while an independent, similar mixture is reserved for testing. The test runs are repeated for the feature sets selected by the ANOVA and SFS methods. Furthermore, the two classification methods, linear regression and SVM, are run separately with each feature selection method.
4.1 Detection results for individual algorithms
First we present the detection results per individual embedding method. In other words, in each case the classifier has the task of classifying an audio track as to whether it bears a marking or not. The results of the four combinations (two feature selection methods, two classification methods) are tabulated in Tables 4 and 5. One can notice that the proposed scheme, especially the feature selection based on the sequential floating search method coupled with the support vector machine classifier (the rightmost columns in Table 5), achieves very satisfactory results. The DSSS, ECHO, STOOLS and, in a sense, FHSS methods of embedding can be detected with no error. The DCTwHAS method proves to be the most difficult to track, where we can only achieve a specificity of 80%.
Table 4: Test results using linear regression classifier for individual methods.
Method   | ANOVA features         | SFS features (yielding max. detection)
         | Miss Det. | False Det. | Miss Det. | False Det.
DSSS     | 0/50      | 0/50       | 0/50      | 0/50
FHSS     | 2/50      | 1/50       | 1/50      | 0/50
ECHO     | 3/50      | 5/50       | 0/50      | 1/50
DCTwHAS  | 13/50     | 12/50      | 5/50      | 8/50
STEGA    | 4/50      | 4/50       | 0/50      | 6/50
STOOLS   | /50       | /50        | 0/50      | 9/50
Table 5: Test results using SVM classifier for individual methods.
Method   | ANOVA features         | SFS features (yielding max. detection)
         | Miss Det. | False Det. | Miss Det. | False Det.
DSSS     | 2/50      | 3/50       | 0/50      | 0/50
FHSS     | 4/50      | 4/50       | 2/50      | 0/50
ECHO     | 0/50      | 6/50       | 0/50      | 0/50
DCTwHAS  | 18/50     | 10/50      | 10/50     | 7/50
STEGA    | 3/50      | 6/50       | 1/50      | 6/50
STOOLS   | /50       | /50        | 0/50      | 0/50
4.2 Detection for ensemble of algorithms
We have carried out similar tests for the ensemble of four active warden schemes (DSSS, FHSS, ECHO, DCTwHAS) and, separately, for the ensemble of two passive warden schemes (STEGA, STOOLS). In other words, when a document was presented, the detector had to classify it as marked or non-marked without knowing which of the four active-warden methods or which of the two passive-warden methods was used for embedding. Different sets of 100 speech records were marked separately with each watermarking method, resulting overall in 400 marked and 400 non-marked records. Similarly, different sets of 100 speech records were marked separately with each steganographic method, resulting overall in 200 marked and 200 non-marked records. These were all equally shared between training and testing patterns. Here too, the SFS-SVM combination gives the best outcome; therefore only those results are presented in Tables 6 and 7.
Table 6: The discriminatory features determined by SFS for ensemble of methods with SVM as the classifier.
[Table body not recoverable in this copy: check marks indicate the features selected by SFS for the watermarking ensemble (Waterm.) and the steganography ensemble (Stegan.) over the nineteen quality measures of Table 1.]
Table 7: Test results using SVM classifier for ensemble of methods.
Method  | ANOVA features         | SFS features (yielding max. detection)
        | Miss Det. | False Det. | Miss Det. | False Det.
Waterm. | 30/200    | 44/200     | 28/200    | 34/200
Stegan. | 20/100    | 35/100     | 9/100     | 18/100
5. CONCLUSION
In this study, an audio steganalysis technique has been proposed and tested. Objective audio quality measures that give clues to the presence of hidden messages were searched thoroughly. With the analysis of variance (ANOVA) method, we determined the best individual features, and with the sequential floating search (SFS) technique, we selected features taking their intercorrelation into account. We comparatively evaluated two classifiers, namely linear regression and support vector machines. The SFS feature selection coupled with the SVM classifier gave the best results. The output of the classifier is the decision as to whether the tested document carries any hidden information.
The proposed method has been tested on four watermarking and two steganographic data hiding techniques, first individually and then in combinations. Experimental results show that the proposed scheme achieves satisfactory results: in the individual tests we achieve almost perfect detection, except for the DCTwHAS watermarking method. In the combination tests, the results are promising but need to be improved. In this respect, we are investigating new ways of obtaining segment averages of the quality measures, for example using the power mean or the maximum. We will also consider a combination of classifiers, each specialized in a subset of the data hiding methods.
APPENDIX: AUDIO QUALITY MEASURES
In this Appendix we give brief descriptions of the quality measures used. We categorize the measures into perceptual and non-perceptual groups, and furthermore divide the non-perceptual group into time-domain and frequency-domain measures. The original signal (the cover document) is denoted x(n), while the distorted signal (the stego-document) is denoted y(n). In some cases the distortion is calculated from the overall data (SNR, CZD, SP, SPM). In most cases, however, the distortion is calculated over short segments and the overall measure is obtained by averaging (SNRseg, BSD, MBSD, EMBSD, PAQM, PSQM, LLR, LAR, ISD, COSH, CDM, WSSD). The segment size is taken to be 20 ms (320 samples for a 16 kHz signal). The same size is used as the window size for the MNB and STFRT techniques.
TimeDomain Measures: These measures (SNR, SNRseg, CZD) compare the two waveforms in the time domain.
Segmental SignaltoNoise Ratio (SNRseg): SNRseg is defined as the average of the SNR values over short segments:
SNRseg = \frac{10}{M} \sum_{m=0}^{M-1} \log_{10} \left( \frac{\sum_{i=Nm}^{Nm+N-1} x^2(i)}{\sum_{i=Nm}^{Nm+N-1} \left( x(i) - y(i) \right)^2} \right)
where x(i) is the original audio signal and y(i) the distorted audio signal. The length of the segments is typically 15 to 20 ms for speech. The SNRseg is applied only to frames whose energy is above a specified threshold, in order to avoid silence regions. The signal-to-noise ratio (SNR) is the special case of SNRseg in which M = 1 and one segment encompasses the whole record [18]. The SNR is very sensitive to the time alignment of the original and distorted audio signals. The SNR is measured as
SNR = 10 \log_{10} \left( \frac{\sum_{i} x^2(i)}{\sum_{i} \left( x(i) - y(i) \right)^2} \right)
This measure has been criticized for being a poor estimator of subjective audio quality [16].
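The two measures above can be sketched numerically. The segment length and the silence-energy floor below are hypothetical choices; the paper uses 15-20 ms segments and an energy threshold to skip silence regions:

```python
# Hedged numeric sketch of SNR and SNRseg as defined above.
import math

def snr(x, y):
    """Global SNR (dB) between original x and distorted y."""
    signal = sum(xi * xi for xi in x)
    noise = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return 10.0 * math.log10(signal / noise)

def snr_seg(x, y, seg_len, energy_floor=1e-6):
    """Average of per-segment SNRs, skipping segments below the energy floor."""
    values = []
    for s in range(0, len(x) - seg_len + 1, seg_len):
        xs, ys = x[s:s + seg_len], y[s:s + seg_len]
        if sum(xi * xi for xi in xs) > energy_floor:
            values.append(snr(xs, ys))
    return sum(values) / len(values)
```

Averaging log-domain segment SNRs is what makes SNRseg less dominated by high-energy passages than the global SNR.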
Czenakowski Distance (CZD): This is a correlation-based metric [2], which directly compares the time-domain sample vectors:
d_{CZD} = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{2 \min\left( x(i), y(i) \right)}{x(i) + y(i)} \right)
Frequency-Domain Measures: These measures (LLR, LAR, ISD, COSH, CDM, WSSD, SP, SPM, STFRT) compare the two signals on the basis of their spectra or in terms of a linear model based on second-order statistics.
Log-Likelihood Ratio (LLR): The LLR, also called the Itakura distance [10, 12], considers an all-pole linear predictive coding (LPC) model of a speech segment, x(n) = \sum_{m=1}^{p} a(m) x(n-m) + u(n), where {a(m), m = 1, ..., p} are the prediction coefficients and u(n) is an appropriate excitation source. The LLR measure is then defined as
d_{LLR} = \log \frac{\mathbf{a}_y^T \mathbf{R}_x \mathbf{a}_y}{\mathbf{a}_x^T \mathbf{R}_x \mathbf{a}_x}
where \mathbf{a}_x is the LPC coefficient vector of the original signal x(n), \mathbf{a}_y is the corresponding vector of the distorted signal y(n), and \mathbf{R}_x, \mathbf{R}_y are the respective covariance matrices.
Log Area Ratio (LAR): The logarea ratio measure is another LPCbased technique, which uses PARCOR (partial correlation) coefficients [18]. The PARCOR coefficients form a parameter set derived from the shorttime LPC representation of the speech signal under test. The area ratio functions of these coefficients give the LAR.
Itakura-Saito Distance Measure (ISD): This is the discrepancy between the power spectrum of the distorted signal, Y(w), and that of the original audio signal, X(w):
d_{ISD} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ \frac{X(w)}{Y(w)} - \log \frac{X(w)}{Y(w)} - 1 \right] dw
COSH Distance Measure: The COSH distance is the symmetrized version of the Itakura-Saito distance; here too the overall measure is calculated by averaging the per-segment COSH values:
d_{COSH} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ \frac{1}{2} \left( \frac{X(w)}{Y(w)} + \frac{Y(w)}{X(w)} \right) - 1 \right] dw
Cepstral Distance Measure (CDM): The cepstral distance is defined between the cepstral coefficients of the original and distorted signals. The cepstral coefficients can also be computed from the LPC parameters [13]. An audio quality measure based on the first L cepstral coefficients c_x(k) and c_y(k) of the original and distorted signals, respectively, can be computed for the mth frame as
d(m) = \frac{10}{\ln 10} \sqrt{ 2 \sum_{k=1}^{L} \left[ c_x(k) - c_y(k) \right]^2 }.
The distortion is calculated over all frames using
d_{CDM} = \frac{ \sum_{m=1}^{M} w(m)\, d(m) }{ \sum_{m=1}^{M} w(m) }
where M is the total number of frames, and w(m) is a weight associated with the mth frame. The weighting could, for example, be the energy in the reference frame. In this study we use a 20 ms frame length and use the energy of the frame as weights.
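The per-frame distance and its weighted average can be sketched in a few lines; all coefficient and weight values below are hypothetical:

```python
# Cepstral distance d(m) in the (10/ln 10) * sqrt(2 * sum(...)) form given
# above, followed by the energy-weighted average over frames.
import math

def cep_dist(cx, cy):
    """Cepstral distance between two frames' first L cepstral coefficients."""
    return (10.0 / math.log(10)) * math.sqrt(
        2.0 * sum((a - b) ** 2 for a, b in zip(cx, cy)))

def cdm(frames_x, frames_y, weights):
    """Weighted average of per-frame cepstral distances (weights: frame energies)."""
    num = sum(w * cep_dist(cx, cy)
              for cx, cy, w in zip(frames_x, frames_y, weights))
    return num / sum(weights)
```

Using frame energies as weights, as in this study, de-emphasizes low-energy (near-silence) frames in the overall distortion.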
Spectral Phase and Spectral Phase-Magnitude Distortions: The phase and/or magnitude spectrum differences [2] have been observed to be sensitive to image and data hiding artifacts. They are defined as
SP = \frac{1}{N} \sum_{w} \left( \varphi_x(w) - \varphi_y(w) \right)^2
SPM = \frac{1}{N} \sum_{w} \left[ \lambda \left( |X(w)| - |Y(w)| \right)^2 + (1 - \lambda) \left( \varphi_x(w) - \varphi_y(w) \right)^2 \right]
where SP is the spectral phase distortion and SPM is the spectral phase-magnitude distortion, \varphi_x(w) is the phase spectrum of the original signal, \varphi_y(w) is the phase spectrum of the distorted signal, |X(w)| and |Y(w)| are the magnitude spectra of the original and distorted signals, respectively, and \lambda is chosen to attach commensurate weights to the phase and magnitude terms.
ShortTime FourierRadon Transform Measure (STFRT): Given a short time Fourier transform (STFT) of a signal, its time projection gives us the magnitude spectrum while its frequency projection yields the magnitude of the signal itself. More generally, rather than taking only the vertical and horizontal projections, if we consider all the other angles, we obtain the Radon transform of the STFT mass. We define the meansquare distance of Radon transforms of the STFT of two signals as a new objective audio quality measure.
Perceptual measures: These measures (WSSD, BSD, MBSD, EMBSD, PAQM, PSQM, MNB) take explicitly into account the properties of the human auditory system.
Bark Spectral Distortion (BSD): The BSD measure is based on the assumption that speech quality is directly related to speech loudness [26]. The signals are subjected to critical band analysis, equal-loudness pre-emphasis, and the intensity-loudness power law. The BSD estimates the overall distortion using the average Euclidean distance between the loudness vectors of the reference and of the distorted audio. The Bark spectral distortion is calculated as
BSD = \frac{ \sum_{i=1}^{K} \left[ L_x(i) - L_y(i) \right]^2 }{ \sum_{i=1}^{K} L_x(i)^2 }
where K is the number of critical bands, and L_x(i) and L_y(i) are the Bark spectra in the ith critical band of the original and the distorted speech, respectively. In this study the BSD is extended to the audio band: whereas for speech 18 critical bands (up to 3.7 kHz) are used, we have calculated and used 25 critical bands (up to 15.5 kHz) in order to measure distortions over the audio band. The overall distortion is calculated by averaging the BSD values of the speech segments.
Modified Bark Spectral Distortion (MBSD): The MBSD is a modification of the BSD that incorporates the noise-masking threshold to differentiate between audible and inaudible distortions [28]. Any inaudible loudness difference, i.e., one below the noise-masking threshold, is excluded from the calculation of the perceptual distortion. The perceptual distortion of the nth frame is defined as the sum of the loudness differences that exceed the noise-masking threshold:
MBSD(n) = \sum_{i=1}^{K} M(i)\, |\Delta L(i)|
where M(i) and \Delta L(i) denote the indicator of perceptible distortion and the loudness difference in the ith critical band, respectively, and K is the number of critical bands. The global MBSD value is calculated by averaging the MBSD scores over non-silence frames.
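The per-frame masking rule can be illustrated directly; the loudness and threshold values below are hypothetical:

```python
# Toy computation of the per-frame MBSD term: only loudness differences that
# exceed the noise-masking threshold in their critical band contribute.

def mbsd_frame(loud_x, loud_y, mask_threshold):
    """Sum of perceptible (above-threshold) loudness differences over K bands."""
    return sum(abs(lx - ly)
               for lx, ly, t in zip(loud_x, loud_y, mask_threshold)
               if abs(lx - ly) > t)

# Bands 1 and 3 have audible differences (0.5 > 0.4 and 2.0 > 1.0);
# the difference in band 2 is zero and hence masked.
frame_value = mbsd_frame([1.0, 2.0, 3.0], [1.5, 2.0, 5.0], [0.4, 0.1, 1.0])
```

The indicator M(i) of the formula corresponds to the `if` filter in the comprehension.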
Enhanced Modified Bark Spectral Distortion (EMBSD): The EMBSD is a variation of the MBSD in which only the first 15 loudness components (instead of the 24 Bark bands) are used to calculate loudness differences, the loudness vectors are normalized, and a new cognition model is assumed that accounts for post-masking as well as temporal masking effects [29].
Perceptual Audio Quality Measure (PAQM): PAQM emulates a model of the human auditory system [3]. The transformation from the physical domain to the psychophysical (internal) domain is performed first by time-frequency spreading and level compression, so that the masking behavior of the human auditory system is taken into account. The signal is first transformed into the short-time Fourier domain, the frequency scale is then converted into the pitch scale z (in Bark), and the signal is filtered to model the transfer from the outer ear to the inner ear. This results in the power-time-pitch representation. Subsequently, the resulting signal is smeared by convolution with the frequency-spreading function and finally transformed into the compressed loudness-time-pitch representation. The quality of an audio system can then be measured using this compressed loudness-time-pitch representation.
Perceptual Speech Quality Measure (PSQM): PSQM is a modified version of the PAQM [4], optimized for speech. For example, for the loudness computation PSQM does not include temporal or spectral masking, and it applies a nonlinear scaling factor to the loudness vector of the distorted speech. PSQM has been adopted as ITU-T Recommendation P.861.
Weighted Slope Spectral Distance Measure (WSSD): A smooth short-time audio spectrum can be obtained using a filter bank consisting of thirty-six overlapping filters of progressively larger bandwidth [14]. The filter bandwidths approximate critical bands in order to give equal perceptual weight to each band. Klatt uses weighted differences between the spectral slopes in each band [15], since spectral variation plays an important role in human perception of audio quality. The spectral slopes are computed in each critical band as S_x(k) = X(k+1) - X(k) and S_y(k) = Y(k+1) - Y(k), where {X(k), Y(k)} are the spectra in decibels, S_x(k) and S_y(k) are the first-order slopes of these spectra, and k is the critical band index. Next, a weight for each band is calculated based on the magnitude of the spectrum in that band:
d_{WSSD}(m) = \sum_{k=1}^{36} w(k) \left( S_x(k) - S_y(k) \right)^2
where the weight w(k) is chosen according to the spectral maximum. The per-segment distance d_{WSSD}(m) is computed separately for each 12 ms audio segment, and the overall distance is obtained by averaging over segments.
Measuring Normalizing Blocks (MNB): The MNB emphasizes the important role of the cognition module in estimating speech quality [25]. The technique is based on a transformation of speech signals into an approximate loudness domain through frequency warping and logarithmic scaling, which are two important factors in the human auditory response. MNB considers human listeners' sensitivity to the distribution of distortion, so it uses hierarchical structures that work from larger time and frequency scales down to smaller ones. MNB integrates over frequency scales and measures differences over time intervals, and likewise integrates over time intervals and measures differences over frequency scales. These MNBs are linearly combined to estimate the overall speech distortion.
REFERENCES
Avcıbaş, İ., N. Memon, and B. Sankur, "Steganalysis using image quality metrics," IEEE Trans. on Image Process., January 2003.
Avcıbaş, İ., B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," Journal of Electronic Imaging, 11(2), pp. 206-223, April 2002.
Beerends, J. G. and J. A. Stemerdink, A perceptual audio quality measure based on a psychoacoustics sound representation, J. Audio Eng. Soc., vol. 40, pp. 963978, Dec. 1992.
Beerends, J. G. and J. A. Stemerdink, A perceptual speech quality measure based on a psychoacoustic sound representation, J. Audio Eng. Soc., vol. 42, pp. 115123, Mar. 1994.
Bender, W., D. Gruhl, N. Morimoto, and A. Lu, Techniques for data hiding, IBM Systems Journal, vol. 35, no: 3&4, pp. 313336, 1996.
Coifman, R. R., and D. L. Donoho, Translationinvariant denoising, in Wavelets and Statistics A. Antoniadis and G. Oppenheim, Eds, SpringerVerlag lecture notes, San Diego, 1995.
Cox, I., J. Kilian, F. T. Leighton, and T. Shamoon, Secure spread spectrum watermarking for multimedia, IEEE Trans. on Image Process., vol. 6, no: 12, pp. 16731687, December 1997.
Gojan, J., M. Goljan and R. Du, Reliable detection of LSB steganography in color and grayscale images, Proc., of the ACM Workshop on Mult. And Secur., Ottawa, CA, pp. 2730, October 5, 2001.
Hyvarinen A., P. Hoyar, and E. Oja, Sparse code shrinkage for image denosing, In Proc. of IEEE Int. Joing. Conf. of Neural Networks, pp. 859864, Anchorage, Alaska.
Itakura, F., Minimum prediction residual principle applied to speech recognition, IEEE Trans. Acoust., Speech and Signal Processing, vol. ASSP23, no. 1, pp. 6772, Feb. 1975.
Johnson, N.F., S. Jajodia, Steganalysis of images created using current steganography software, in David Aucsmith (Ed.): Information Hiding, LNCS 1525, pp. 3247. SpringerVerlag Berlin Heidelberg, 1998.
Juang , B. H., On using the ItakuraSaito measure for speech coder performance evaluation, AT&T Bell Laboratories Tech. Jour., vol. 63, no. 8, pp. 14771498, Oct. 1984.
Kitawaki, N., H. Nagabuchi, and K. Itoh, Objective quality evaluation for lowbitrate speech coding systems, IEEE J. Select. Areas Commun., vol. 6, pp. 242248, Feb. 1988.
Klatt, D. H., A digital filter bank for spectral matching, Proc. 1976 IEEE ICASSP, pp. 573576, Apr. 1976.
Klatt, D. H., Prediction of perceived phonetic distance from criticalband spectra: a first step, Proc. 1982 IEEE ICASSP, Paris, pp. 12781281, May 1982.
McDermott, B. J., C. Scaglia, and D. J. Goodman, Perceptual and objective evaluation of speech processed by adaptive differential PCM, IEEE ICASSP, Tulsa, pp. 581585, Apr. 1978.
Pudil, P., J. Novovicova, and J. Kittler, Floating search methods in feature selection, Pattern Recognition Letters, 15, pp. 11191125, 1994.
Quackenbush, S. R., T. P. Barnwell III, and M. A. Clements, Objective Measures of Speech Quality, Prentice Hall, Englewood Cliffs, 1988.
Rencher, Methods of multivariate data analysis, New York, John Wiley, 1995.
Simmons, G. J., Prisoners problem and the subliminal channel, CRYPTO83Advances in Cryptology, pp. 5167, August, 2224, 1984.
Steganos, www.steganos.com.
Stools, A. Brown, STools version 4.0, Copyright C. 1996, http://members.tripod.com/steganography/stego/stools4.html.
Vapnik, V., The Nature of Statistical Learning Theory. Springer, New York, 1995.
Voloshynovsky, S., S. Pereira, V. Iquise, and T. Pun, Attack modeling: towards a second generation watermarking benchmark, Signal Processing, vol. 81, pp. 11771214, 2001.
Voran, S., Objective estimation of perceived speech quality, part I: development of the measuring normalizing block technique, IEEE Transactions on Speech and Audio Processing, in Press, 1999.
Wang, S., A. Sekey, and A. Gersho, An objective measure for predicting subjective quality of speech coders, IEEE J. Select. Areas Commun., vol. 10, pp. 819829, June 1992.
Westfeld, A. Pfitzmann, Attacks on steganographic systems, in Information Hiding, LNCS 1768, pp. 6166, SpringerVerlag Heidelberg, 1999.
Yang, W, M. Dixon, and R. Yantorno, A modified bark spectral distortion measure which uses noise masking threshold, IEEE Speech Coding Workshop, pp. 5556, Pocono Manor, 1997.
Zwicker , E. and H. Fastl, Psychoacoustics Facts and Models, SpringerVerlag, 1990.
* This work was partially supported by TÜBİTAK Project 102E018 and Boğaziçi University Research Fund project 01A201. Nasir Memon was also supported by AFOSR Award Number F49620-01-1-0243.
[Figure: block diagram of the audio steganalyzer — incoming document; feature selection; decision and classification; decision: marked or not marked; extract hidden message]