1 Introduction
According to the World Bank, the large majority of the world's population has access to electrical energy, which is made available to people via a complex transmission and distribution system that interconnects power plants to consumers. In the operation of this system two types of losses are expected: technical and non-technical. Technical Losses (TL) occur due to power dissipation in the materials that compose the electrical power system itself, such as cables, connectors, and power transformers. Non-Technical Losses (NTL) represent energy losses due to energy theft and errors of billing or measurement [1].
According to the Electricity Distribution Loss Report published by ANEEL (the Brazilian National Electricity Agency) [2], NTLs account for a significant share of all the energy injected into the Brazilian electrical power system. These losses impact consumers through more expensive energy bills, electricity distribution companies through reduced revenues, and the reliability of the electrical power system [3]. Part of the problem of tackling NTLs is dealing with the metering infrastructure itself, which is pointed out as being the most faulty subsystem [3]. Recent advances in the Internet of Things (IoT) have made it possible to address these problems through the adoption of Advanced Metering Infrastructures (AMIs), which can provide consumption data with high temporal resolution, thus reducing losses related to billing and metering issues. Together with AMIs, artificial intelligence algorithms can play an important role in detecting NTLs due to electricity theft in power distribution systems
[4, 5]. In this work, we developed a predictive method using supervised learning with deep learning methodologies to identify fraudulent consumer units. We train and evaluate our models on a dataset of 34 months of daily electricity consumption. This work brings several improvements over the previous state-of-the-art method [5], such as the use of Quantile normalization on the original data, the use of an additional binary input channel to deal with missing values, and the use of an attention mechanism.
Our results show that the use of a model with attention mechanism layers delivered an increase in the Area Under the Curve (AUC) score when compared to the baseline. The combination of this model with the binary input channel (Binary Mask) and Quantile normalization further improved the AUC and the F1 score.
The article is organized as follows: in section 2 we present an overview of related works; in section 3 we present the problem and the methodology adopted; in section 4 we detail the proposed solution and the metrics used to evaluate the performance of the algorithms; in section 5 we describe the data processing steps; section 6 presents the results obtained; and finally, in section 7 we describe our conclusions and future perspectives.
2 Related Work
The application of deep learning to NTL detection has increased in recent years. Several approaches to the problem have been proposed, and the results obtained are significantly superior to those of traditional rule-based methods [1, 6, 5]. However, one of the main difficulties in developing data-driven models for NTL detection in the electricity industry is the lack of publicly available data. Energy consumption is sensitive data, and due to privacy and security concerns the vast majority of electricity distribution companies do not share it. One way to circumvent this problem is to generate synthetic data. For instance, Liu et al. [4] inject artificial electricity thefts into a database of regular consumers. Although useful, the generation of synthetic data may lead to the unintentional introduction of bias or misrepresentation of real situations.
Zheng et al. [5] present a study using a dataset with real electricity theft data provided by the State Grid Corporation of China (SGCC). This study, which has become a baseline for subsequent works, introduces a neural network architecture based on a wide (dense) and a deep (convolutional) component trained together. Moreover, their proposed reshaping of the 1D electricity consumption sequences into a 2D format provides a straightforward way to explore neighborhood correlations with 2D convolutional neural networks (CNNs). Hasan et al.
[7] also use real electricity theft data; they propose a combination of CNN and LSTM (Long Short-Term Memory) architectures in order to explore the time-series nature of the electricity consumption data. Nonetheless, satisfactory results were achieved only after applying the synthetic minority oversampling technique (SMOTE) [8] to account for the imbalanced dataset. In Li et al. [9], a combination of CNN with the Random Forest (RF) algorithm is applied to a dataset of over 5000 residential and business consumers provided by Electric Ireland and the Sustainable Energy Authority of Ireland (SEAI), with thieves being synthetically injected. Also motivated by the data reshaping introduced by Zheng et al. [5], the authors reshaped the electricity consumption data into a 2D format, allowing a more generalized feature extraction by the CNN.
3 Problem Analysis
Our task is to detect fraud in electricity consumption. The dataset is a collection of real electricity consumption samples and was released by the State Grid Corporation of China (SGCC). The data is a sequence of daily electricity consumption readings, which we characterize as a time series. The basic assumption that guides the analysis of time series is that there is a more or less constant causal system, related to time, which influenced the data in the past and may continue to do so in the future. The purpose of time series analysis is to identify non-random patterns in the daily electricity consumption behavior that allow more accurate predictions. See section 5 for a time series analysis and autocorrelations for the problem at hand.
3.1 Data Methodology
An important contribution from Zheng et al. [5] is the transformation of the one-dimensional data into a bidimensional format (Figure 1). The 2D format allows the exploration of periodicity and neighborhood characteristics with computer vision models, such as 2D convolutional neural networks.
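As a concrete illustration, the weekly reshaping can be sketched with NumPy (the array sizes here are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical daily consumption series; the length must be a multiple of 7.
daily = np.arange(28, dtype=float)  # 4 weeks of daily readings

# Reshape the 1D sequence into a 2D (weeks x days-of-week) grid so that a
# 2D CNN can correlate the same weekday across consecutive weeks.
weekly = daily.reshape(-1, 7)

print(weekly.shape)  # (4, 7)
```

Each row is one week, so a convolution kernel sees both consecutive days (horizontally) and the same weekday of adjacent weeks (vertically).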
3.2 Missing data
Missing data is a ubiquitous problem. In the literature we find two common practices to deal with it. One approach is to delete the incomplete readings from the dataset; however, this may discard valuable information. An alternative is to estimate the missing value using interpolation, or with the median or mean of the feature [10]. Although those techniques have been proven effective, they impose strong assumptions about the nature of the missing data and hence might bias the predictive models negatively. In addition to these methods, attempts to fill the missing data using genetic algorithms, simulated annealing and particle swarm optimization have also been proposed [11]; however, when dealing with large datasets such techniques can be prohibitively slow.

To deal with the missing values, we create a binary mask as an additional channel of the input, as follows. First, we identify the indices of all missing data and create a binary mask in which missing entries receive the value 1 and all remaining entries receive 0; we call this mask the Binary Mask. The missing entries in the values channel receive the value 0. These two channels are the input to a 2D CNN. See Figure 2 for an illustration of our method.
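A minimal NumPy sketch of this two-channel construction follows; the 1-for-missing/0-otherwise encoding and the toy array are our assumptions for illustration:

```python
import numpy as np

# Toy consumption grid (weeks x days); NaN marks missing readings.
values = np.array([[1.2, np.nan, 0.8],
                   [np.nan, 2.0, 1.5]])

# Mask channel: 1 where the reading is missing, 0 elsewhere
# (this 1/0 convention is our assumption of the encoding).
mask = np.isnan(values).astype(np.float32)

# Values channel: the readings themselves, with missing entries set to 0.
filled = np.nan_to_num(values, nan=0.0)

# Stack into a 2-channel input for a 2D CNN: shape (channels, weeks, days).
x = np.stack([filled, mask])
print(x.shape)  # (2, 2, 3)
```

The network can then learn how to weigh a zero that means "missing" differently from a zero that means "no consumption".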
4 Architecture overview
Image recognition is a classic classification problem in which CNNs have a history of high efficacy [12, 13]. As our data input resembles an image, we developed two models to address the problem, both using 2D convolutions: a CNN and a multi-head attention model. Attention models are used in many Natural Language Processing (NLP) tasks and have recently been adapted to vision problems [14].

4.1 CNN Architecture
Our CNN model has three layers of 2D convolutional operators, all with kernel size 3. The first layer takes the two input channels (values and Binary Mask); the second layer is followed by a non-linear PReLU activation [15]; the third and final convolutional layer applies a dilated kernel with a stride factor, followed by a PReLU activation function. The convolutional output is flattened and connected to a fully connected layer; Figure 3 summarizes the model.

Dilation is a practice to increase the receptive view using sparse filters [16]. The convolution itself is modified to use the filter parameters in a sparse way, skipping a fixed number of features along both dimensions at regular intervals; despite the sparsity, dilated convolutions do not lose resolution. The stride, or subsampling factor as mentioned in [17], is the step of the convolution, used to reduce the overlap of receptive fields and the spatial dimensions, which can be seen as an alternative to pooling layers.
4.2 Multi-Head Attention Architecture
Attention mechanisms have shown great ability to solve many kinds of problems, ranging from NLP tasks [18] to computer vision [19] and tabular data [20]. Inspired by these recent advances, we propose a novel neural network that takes advantage of both attention mechanisms and convolutional layers, whose outputs are concatenated and unified through a final convolution. We start by describing the inner workings of the convolutional part.

Convolutional Layer: Our convolutional layer is composed of two parts: one performs standard convolutions over the inputs, while the other applies a dilated convolution; both parts use the same kernel size, and their results are concatenated to form a single output.
Attention Mechanism: Our attention mechanism differs from standard approaches by looking at the channels of the input as the heads and mapping them to another set of attention heads. That is, given an input of shape $(h, n, d)$, we first transpose the first two dimensions and flatten the result into a matrix $X$ of shape $(n, h \cdot d)$. Let $W_Q$, $W_K$, $W_V$ be learnable linear transformations, where $h$ is the number of channels or heads coming in, $n$ is the size of the sequence, $d$ is the dimension of every element in the sequence, and $h'$ is the number of output heads or channels. We start by computing $Q = XW_Q$, $K = XW_K$, $V = XW_V$. Second, we map $Q$, $K$ and $V$ back to a tridimensional shape by unflattening and transposing so that each has shape $(h', n, d)$; finally, we compute the output of the attention layer as follows:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d}}\right)V \qquad (1)$$
Summarizing, given an input $X \in \mathbb{R}^{h \times n \times d}$ we perform the following mapping:

$$\mathbb{R}^{h \times n \times d} \to \mathbb{R}^{h' \times n \times d} \qquad (2)$$

This allows for consistency of the output shape between the attention and convolutional layers.
Unification: After the input is processed by both the attention and convolutional layers, we concatenate the results into a single matrix and unify it through a convolution followed by Layer Norm and a PReLU activation function. We call this a Hybrid Multi-Head Attention/Dilated Convolution Layer.
Classifier: Finally the output of a sequence of these hybrid layers is flattened and fed to a linear feedforward neural network that will predict the input class.
Our final architecture is composed of two hybrid layers. In the first, the attention part maps the input heads to a new set of heads, while the convolutional part receives the 2-channel 2D input and outputs a multi-channel matrix of the same spatial size; the unification is fed to a second hybrid layer with the same dimensions. Lastly, a one-hidden-layer dense neural network with PReLU activation classifies the input. Figure 4 shows the model.
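A NumPy sketch of the channels-as-heads attention mapping described above, under our reading of the shapes (random matrices stand in for the learnable transformations):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(x, h_out, rng):
    """Attention that treats the input channels as heads.

    x: (h, n, d) -> output: (h_out, n, d). The weight shapes are our
    reading of the text; a trained model would learn them."""
    h, n, d = x.shape
    flat = x.transpose(1, 0, 2).reshape(n, h * d)              # (n, h*d)
    wq, wk, wv = (rng.standard_normal((h * d, h_out * d)) for _ in range(3))
    def heads(w):
        # Linear map, then unflatten and transpose back to (h_out, n, d).
        return (flat @ w).reshape(n, h_out, d).transpose(1, 0, 2)
    q, k, v = heads(wq), heads(wk), heads(wv)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))      # (h_out, n, n)
    return attn @ v                                            # (h_out, n, d)

x = rng.standard_normal((2, 5, 4))   # 2 input channels, sequence of 5, dim 4
y = channel_attention(x, h_out=3, rng=rng)
print(y.shape)  # (3, 5, 4)
```

The output keeps the sequence and feature dimensions, so it can be concatenated with the convolutional branch of the same spatial size.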
4.3 Metrics
In this work we evaluate our models with the AUC, which represents the degree of separability of the data, and the ROC curve, which depicts the probability curve created by plotting the true positive rate versus the false positive rate. The AUC is the area under this curve, summarizing the ROC curve in a single value.
We also evaluate the performance with the F1 score, which combines precision and recall into a single number that indicates the general quality of the model. Besides these metrics, we use the Mean Average Precision (MAP) [21] to measure the effectiveness of information retrieval. To evaluate MAP@N, we first order the true labels by the predicted probabilities and consider the subset of the top N probabilities, given by the following equation:

$$\mathrm{MAP@N} = \frac{1}{N}\sum_{k=1}^{N} P(k)\, y_k \qquad (3)$$

where $P(k)$ is the precision over the top-$k$ ranked consumers and $y_k$ is the true label of the $k$-th ranked consumer, with $y_k = 1$ if a thief and $y_k = 0$ otherwise. For the loss function we use cross entropy, a classic choice for classification problems.
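These metrics can be sketched on toy data with scikit-learn and NumPy; the map_at_n helper is our reading of the MAP@N computation, and the labels and probabilities are invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# Toy labels (1 = thief) and predicted fraud probabilities.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_prob = np.array([0.1, 0.4, 0.8, 0.3, 0.9, 0.2, 0.6, 0.55])

auc = roc_auc_score(y_true, y_prob)
f1 = f1_score(y_true, (y_prob >= 0.5).astype(int))

def map_at_n(y_true, y_prob, n):
    """Average precision over the top-n consumers ranked by predicted
    probability (our reading of the MAP@N used in the text)."""
    order = np.argsort(-y_prob)[:n]          # indices of the top-n scores
    hits = y_true[order]                     # 1 where a ranked consumer is a thief
    precision_at_k = np.cumsum(hits) / np.arange(1, n + 1)
    return float(np.sum(precision_at_k * hits) / n)

print(auc, f1, map_at_n(y_true, y_prob, 3))
```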
5 Data
The SGCC data presents the daily consumption of 42,372 consumer units over a time window ranging from January 2014 to October 2016, corresponding to approximately 148 weeks. The data is divided into thieves and normal electrical consumers, where the former comprise approximately 8.55% of the total. The data does not show the date when the fraud occurs. We tested the 2D reshape on both a monthly and a weekly basis, and decided to use the weekly period, as we noticed a clearer correlation contrast between thieves and normal electricity customers.

Due to the granularity of the data, it is common to have a significant number of missing values: approximately 25% of the readings are missing. Our proposal to handle the missing data was presented in section 3.2. The dataset description is shown in Table 1. The same dataset was analyzed in [5], where the authors used a Wide and Deep architecture [22]; more details about this study are described in section 6.1.
Description  Value 

Time window  2014/01/01 – 2016/10/31 
Normal electricity customers  38 757 approx. 91.5% 
Electricity thieves  3 615 approx. 8.55% 
Total customers  42 372 
Missing data cases  approx. 25% 
5.1 Data Preprocessing
Data preprocessing is a key element that determines the success or failure of many deep learning models. The realistic SGCC data has some particular features, including a significant amount of missing data and a long-tailed distribution with strong skewness and kurtosis. The missing data is discussed in section 3.2. Regarding atypical data, or outliers, we noticed that most cases occur among the normal electricity customers, and we did not remove them, to avoid losing useful information. Prior to the normalization of the data, we studied the dataset as a time series, since there is a single variable measured at uniform intervals. To evaluate possible correlations and periodicity, two experiments were conducted: (I) we accumulated the electricity consumption over the days of the week (from Monday to Sunday) and constructed a correlation matrix between the days of the week for thieves and for normal electricity customers, as illustrated in Figure 5; (II) to find periodicity and patterns distinguishing the classes, we used the autocorrelation function, which provides the correlation of a time series with its own lagged values, shown in Figure 6 (the horizontal axis indicates the lag being considered, the vertical axis the autocorrelation score; top: normal electricity customers, bottom: thieves). The analysis of Figures 5 and 6 shows some differences between thieves and normal electricity customers. In particular, the greater correlation observed between days of the week for the thieves suggests that this feature could be exploited to improve model performance; in other words, the thieves have similar behaviour among themselves.
The SGCC data exhibits heteroscedasticity (non-constant variability) [23], which causes the resulting distribution to be positively asymmetric, or leptokurtic [24]: there is great variability on the right side of the distribution, creating a long tail, as shown in Figure 7 (top). This asymmetry can lead to spurious interactions in the deep learning model due to the non-constant variations. To deal with this asymmetric distribution we perform a Quantile uniform normalization, as provided by [25]. The Quantile uniform transformation is a non-linear function applied to each feature independently. This normalization spreads out the most frequent values between 0 and 1: first, the original values are mapped to an estimate of the cumulative distribution; then these values are spread out over a fixed number of quantiles. A distribution of the processed data is shown in Figure 7 (bottom). One limitation of the Quantile transform is the amount of data required to perform it: as a rule of thumb, creating n quantiles requires at least n samples. In addition to the Quantile processing, we also tested a Yeo-Johnson power transform [26], but the Quantile transformation produced values closer to the target uniform distribution. We also verified that the Kullback-Leibler divergence ($D_{KL}$) [27] to a uniform distribution is minimized.
$D_{KL}$ measures the matching between two distributions and is given by the formula:

$$D_{KL}(P\,\|\,Q) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)} \qquad (4)$$

where $P$ is the distribution of the data transformed by the Quantile transformation and $Q$ is the ground truth, in our case a uniform distribution; we are interested in matching $P$ to $Q$. A lower $D_{KL}$ value means a better match. Table 2 shows the values before and after the Quantile transformation.
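As an illustration, the Quantile normalization and the Kullback-Leibler check against a uniform target can be sketched with scikit-learn and NumPy (the lognormal toy data, n_quantiles and bin count are our assumptions):

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer

rng = np.random.default_rng(0)
# Long-tailed toy "consumption" feature (lognormal: strong skew, heavy tail).
x = rng.lognormal(mean=0.0, sigma=2.0, size=(5000, 1))

# Map the feature to a uniform distribution on [0, 1] via its empirical
# quantiles; n_quantiles here is illustrative, not the paper's setting.
qt = QuantileTransformer(n_quantiles=1000, output_distribution="uniform")
x_u = qt.fit_transform(x)

def kl_to_uniform(data, bins=20):
    """D_KL(P || Q) of a histogram P against a uniform Q over the same bins."""
    p, _ = np.histogram(data, bins=bins, range=(data.min(), data.max()))
    p = p / p.sum()
    q = np.full(bins, 1.0 / bins)
    mask = p > 0  # empty bins contribute nothing to the sum
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

print("raw:        ", kl_to_uniform(x.ravel()))    # large: far from uniform
print("transformed:", kl_to_uniform(x_u.ravel()))  # near 0: close to uniform
```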
The processed dataset has less kurtosis and skewness, which brings stationarity to the data according to the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test [28]. Namely, the data variance, mean and covariance have more stationary behavior, and their statistical properties do not change over time in the columns where the KPSS test is True (Table 2).

Property  Raw data  Processed data

Min  0.00  0.00 
Max  800003.31  1.00 
Mean  6.87  0.40 
Std  236.14  0.35 
Skewness  2551.62  0.01 
Kurtosis  7170709.11  1.67 
$D_{KL}$  15121.81  57.15
KPSS test  False: 1016 / True: 19  False: 581 / True: 454 
6 Experiments
In this section we describe the experiments performed in this work. In addition to the two models developed, we also compare our attention model with the Attention Augmented Convolutional Network [19]. To evaluate the proposed treatment of missing data described in section 3.2, we also performed experiments with and without the Binary Mask. All training sessions were performed with different train split percentages and with stratified k-fold cross-validation.
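The stratified splitting used throughout can be sketched with scikit-learn; the toy labels mimicking the class imbalance are our assumption:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy imbalanced labels mimicking the ~8.5% thief ratio of the dataset.
y = np.array([1] * 17 + [0] * 183)
X = np.zeros((len(y), 1))  # placeholder features; only the labels drive the split

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, valid_idx in skf.split(X, y):
    # Each validation fold keeps (nearly) the same class ratio as the whole set.
    print(len(valid_idx), int(y[valid_idx].sum()))
```

Stratification matters here because a plain random split could leave a fold with almost no thieves.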
6.0.1 Binary Mask Experiment
Using stratified k-fold with the Hybrid Multi-Head Attention Dilated Convolution model, we evaluated the difference between the data with and without the Binary Mask. In the setting without the Binary Mask, all missing data was filled with a fixed value. Table 3 presents the results of this experiment, where the column named Only Quantile refers to the setting without the Binary Mask.
6.0.2 Attention Augmented Convolution Network
We implemented the Attention Augmented Convolutional Network algorithm proposed in [19], a self-attention algorithm developed for two-dimensional tasks as an alternative to CNNs. The authors combine features extracted from the convolutional layers with self-attention through concatenation. The experiment was performed with stratified k-fold at different train split sizes. Table 4 shows the results.
6.1 Baselines
Detection of electricity fraud with granular data using deep learning techniques is still rare in the literature. The dataset on which this work was developed consists of real data, which makes it even rarer. To compare our model with other approaches, we use [5], the work that made the dataset available. Its authors developed a study with the Wide and Deep technique [29]: the Wide component tries to memorize global knowledge, while the CNN layers capture features of the electricity consumption data. These two components, associated, resulted in a good performance as measured by the AUC.
6.1.1 Dataset preprocessed with Missing Values Interpolated
Our aim in this experiment is to verify that:

The Quantile transformation contributes positively to our data preprocessing proposal;

The Hybrid Multi-Head Attention/Dilated Convolution outperforms the Wide and Deep model [22] on the same data.

For this, we preprocessed the SGCC dataset with interpolated missing values as in Zheng et al. [5] and trained our model with stratified k-fold. Results are presented in Table 3, in the column named Interpolated Missing Values. With the same dataset configuration as our baseline, we improve all the metric scores; the results presented are the average values over all folds at the same epoch. To show that the Quantile transformation is effective, compare the columns Only Quantile and Interpolated Missing Values in Table 3.

Metric  Interpolated Missing Values  Only Quantile  Quantile + Binary Mask

AUC  0.840  0.889  0.925
F1 score  0.365  0.504  0.606
MAP@100  0.960  0.972  0.992
MAP@200  0.941  0.961  0.972
6.2 Results and Discussion
Table 4 presents the main results of the models developed in this work. Three train splits (50%, 75% and 80%) were tested with stratified k-fold. The Hybrid Multi-Head Attention/Dilated Convolution significantly outperformed the baseline. Moreover, the scores obtained with our two models and with the Attention Augmented Convolutional Network show that the Quantile transformation brings a significant improvement to the data preprocessing. The attention mechanism produces a notable increase in the F1 score. Another distinguishing behaviour is the much faster convergence of the attention model compared with the CNN model: in our tests the CNN needed several times more epochs to converge than the Hybrid Attention model. Figure 8 presents the evolution of the scores as a function of the epoch for the Hybrid Multi-Head Attention/Dilated Convolution.
Model  Metric  train = 50%  train = 75%  train = 80%

Conv. Neural Network  AUC  0.898  0.920  0.922
  F1 score  0.477  0.508  0.530
  MAP@100  0.977  0.978  0.979
  MAP@200  0.969  0.970  0.976

Hybrid MultiHead Attention Dil. Conv.  AUC  0.903  0.926  0.925
  F1 score  0.553  0.583  0.606
  MAP@100  0.996  0.988  0.992
  MAP@200  0.981  0.971  0.972

Attention Augmented Conv. Network  AUC  0.881  0.902  0.911
  F1 score  0.503  0.543  0.551
  MAP@100  0.956  0.969  0.969
  MAP@200  0.948  0.956  0.952
With respect to the time spent during training and inference, Table 5 shows the average time per epoch across folds during training, and the total time needed to run inference on the validation data. The results achieved enable the establishment of protocols for inspecting suspected cases with high assertiveness. However, it is necessary to note that the choice of the classification threshold is an important point for decision making. Our model has an optimal threshold, shown in Figure 9, that maximizes the F1 score. Note that when choosing a threshold there is a trade-off between precision and recall: if precision is prioritized, we must choose a larger threshold. Table 4 and the confusion matrix in Figure 10 correspond to a fixed threshold.

Model  Training time  Inf. time  # Params.

CNN  2 min 27 s  32 s  3M
Hybrid Attn.  3 min 16 s  37 s  51M
Attn. Augmented  2 min 40 s  20 s  17M
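The threshold selection discussed above can be sketched with scikit-learn's precision-recall curve on toy data (labels and probabilities invented for illustration):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.3, 0.9, 0.2, 0.6, 0.7, 0.45, 0.5])

precision, recall, thresholds = precision_recall_curve(y_true, y_prob)
# F1 at each candidate threshold; the argmax gives the operating point
# that balances the precision/recall trade-off.
f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
best = np.argmax(f1[:-1])  # the last precision/recall pair has no threshold
print("best threshold:", thresholds[best], "F1:", f1[best])
```

Raising the threshold past this point trades recall for precision, which may be preferable when field inspections are costly.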
7 Conclusion
In this paper, we introduced a Hybrid Multi-Head Self-Attention/Dilated Convolution method for electricity theft detection with realistic imbalanced data. We applied three innovations to improve upon the previous baseline work:

A Quantile normalization of the dataset;

The introduction of a second channel to the input called Binary Mask;

A novel multi-head self-attention model.
Another key element is the reshaping of the time series data into a 2D format, introduced by [5, 9], allowing us to treat each consumer sample as an image and to use CNNs. Our attention model outperformed the CNN model on the F1 score and converged in far fewer epochs, and correspondingly less training time, than the CNN.
The model presented in [19] was the inspiration for our attention model. The unification step, which combines the outputs from the attention, standard convolution and dilated convolution, allowing information from different spatial sizes and sources to be merged, is the core of our model's architecture. The characteristics of our model do not depend on the data used here; computer vision problems, for instance, could also be addressed by it.
Due to the high number of missing values in the data (approx. 25%), classic attempts to reconstruct these values can introduce significant bias, resulting in poor solutions. With the addition of the Binary Mask we improved the F1 score; to the best of our knowledge, this is the first time that a Binary Mask was introduced as an input channel of a CNN for dealing with missing data. Deep learning solutions for electricity theft detection are rare in the literature. To encourage research in this field we are providing the code in a GitHub repository, https://github.com/neuralmindai/electricitytheftdetectionwithselfattention , and the dataset can be found at another repository, https://github.com/henryRDlab/ElectricityTheftDetection/ . The results obtained in this study demonstrate that there is still room for advances in the results obtained by deep learning techniques applied to electricity theft detection on real smart-metered data.
7.1 Future Work
The insights produced and the experience gained from this work will be used in future experiments involving energy, such as energy consumption forecasting and fraud detection in the context of another AMI framework, where data will be available in near real time and with a higher sampling rate.
7.2 Acknowledgments
This work is funded by ENEL in the ANEEL R&D Program PD_06072_06 61/2018. Roberto Lotufo thanks CNPQ's support through research project PQ2018.
References
 [1] Patrick Glauner, Jorge Augusto Meira, Petko Valtchev, Radu State, and Franck Bettinger. The challenge of non-technical loss detection using artificial intelligence: A survey. International Journal of Computational Intelligence Systems, 10(1):760, 2017.
 [2] ANEEL. Perdas de energia elétrica na distribuição. http://www.aneel.gov.br/documents/654800/18766993/Relatório+Perdas+de+Energia_+Edição+120190207.pdf, 1 2019.
 [3] Abhishek Chauhan and Saurabh Rajvanshi. Non-technical losses in power system: A review. pages 558–561, 02 2013.
 [4] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond, 2019.
 [5] Zibin Zheng, Yatao Yang, Xiangdong Niu, Hong-Ning Dai, and Yuren Zhou. Wide & deep convolutional neural networks for electricity-theft detection to secure smart grids. IEEE Transactions on Industrial Informatics, 14:1606–1615, 04 2018.
 [6] Bo Li, Kele Xu, Xiaoyan Cui, Yiheng Wang, Xinbo Ai, and Yanbo Wang. Multi-Scale DenseNet-Based Electricity Theft Detection. arXiv:1805.09591 [cs], May 2018. arXiv: 1805.09591.
 [7] Md Nazmul Hasan, Rafia Nishat Toma, Abdullah-Al Nahid, M. M. Manjurul Islam, and Jong-Myon Kim. Electricity Theft Detection in Smart Grid Systems: A CNN-LSTM Based Approach. Energies, 12(17):3310, January 2019.

 [8] Runtao Yang, Chengjin Zhang, Rui Gao, and Lina Zhang. A novel feature extraction method with feature selection to identify golgi-resident protein types from imbalanced data. International Journal of Molecular Sciences, 17(2):218, Feb 2016.
 [9] Shuan Li, Yinghua Han, Xu Yao, Song Yingchen, Jinkuan Wang, and Qiang Zhao. Electricity Theft Detection in Power Grids with Deep Learning and Random Forests, 2019.
 [10] Mohd Mustafa Al Bakri Abdullah. Filling missing data using interpolation methods: Study on the effect of fitting distribution. Key Engineering Materials, 594-595:889–895, 01 2014.
 [11] Collins Leke, Bhekisipho Twala, and Tshilidzi Marwala. Missing data prediction and classification: The use of auto-associative neural networks and optimization algorithms. CoRR, abs/1403.5488, 2014.
 [12] Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998.
 [13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
 [14] Niki Parmar, Prajit Ramachandran, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 68–80, 2019.
 [15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852, 2015.
 [16] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. 11 2015.
 [17] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation, 2014.
 [18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2018.
 [19] Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V. Le. Attention augmented convolutional networks, 2019.
 [20] Sercan O. Arik and Tomas Pfister. TabNet: Attentive interpretable tabular learning, 2019.
 [21] Andrew Turpin and Falk Scholer. User performance versus precision measures for simple search tasks. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’06, page 11–18, New York, NY, USA, 2006. Association for Computing Machinery.
 [22] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. Wide & deep learning for recommender systems. CoRR, abs/1606.07792, 2016.
 [23] James Knaub. Heteroscedasticity and homoscedasticity. Vol 2:431–432, 01 2007.
 [24] Egon S. Pearson. The analysis of variance in cases of nonnormal variation. Biometrika, 23(1/2):114–133, 1931.

 [25] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
 [26] In-Kwon Yeo and Richard Johnson. A new family of power transformations to improve normality or symmetry. Biometrika, 87, 12 2000.
 [27] S. Kullback and R. A. Leibler. On information and sufficiency. Ann. Math. Statist., 22(1):79–86, 03 1951.

 [28] Denis Kwiatkowski, Peter Phillips, Peter Schmidt, and Yongcheol Shin. Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? Journal of Econometrics, 54(1-3):159–178, 1992.
 [29] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. Wide & deep learning for recommender systems. CoRR, abs/1606.07792, 2016.