Friday 9 June 2017

The Importance of the Moving Average in Time Series


A moving average is the average of time series data (observations equally spaced in time) taken over several consecutive periods. It is called "moving" because it is continually recomputed as new data become available: it advances by dropping the oldest value and adding the most recent one. For example, a six-month moving average of sales can be computed by taking the average of sales from January through June, then the average of sales from February through July, then March through August, and so on. Moving averages (1) reduce the effect of temporary variation in the data, (2) improve the fit of the data to a line (a process called smoothing) so that the trend of the data shows more clearly, and (3) highlight any value above or below the trend. If you are tracking a quantity with very high variation, often the best you can do is compute its moving average; for example, I wanted to know the moving average of the data so I would have a better understanding of how we were doing. When you are trying to make sense of numbers that change frequently, computing the moving average is a good place to start.

2.1 Moving average models (MA models). Time series models known as ARIMA models may include autoregressive terms and/or moving average terms. In Week 1, we learned that an autoregressive term in a time series model for the variable x_t is a lagged value of x_t; for example, a lag-1 autoregressive term is x_{t-1} (multiplied by a coefficient). This lesson defines moving average terms. A moving average term in a time series model is a past error (multiplied by a coefficient). Let w_t ~ iid N(0, σ²_w), meaning that the w_t are identically and independently distributed, each with a normal distribution having mean 0 and the same variance.

The moving average model of order 1, denoted MA(1), is x_t = μ + w_t + θ₁ w_{t-1}. The moving average model of order 2, denoted MA(2), is x_t = μ + w_t + θ₁ w_{t-1} + θ₂ w_{t-2}. The moving average model of order q, denoted MA(q), is x_t = μ + w_t + θ₁ w_{t-1} + θ₂ w_{t-2} + … + θ_q w_{t-q}.

Note: many textbooks and software programs define the model with negative signs before the θ terms. This does not change the general theoretical properties of the model, although it does flip the algebraic signs of the estimated coefficient values and of the (unsquared) θ terms in the formulas for ACFs and variances. You need to check your software to verify whether negative or positive signs have been used in order to write the estimated model correctly. R uses positive signs in its underlying model, as we do here.

Theoretical properties of a time series with an MA(1) model: the mean is E(x_t) = μ, the variance is Var(x_t) = σ²_w(1 + θ₁²), and the autocorrelation function (ACF) is ρ₁ = θ₁ / (1 + θ₁²), with ρ_h = 0 for h ≥ 2. Note that the only nonzero value in the theoretical ACF is at lag 1; all other autocorrelations are 0. Thus a sample ACF with a significant autocorrelation only at lag 1 is an indicator of a possible MA(1) model. For interested students, proofs of these properties are in an appendix to this handout.

Example 1. Suppose that an MA(1) model is x_t = 10 + w_t + 0.7 w_{t-1}, where w_t ~ iid N(0, 1). Thus the coefficient is θ₁ = 0.7. The theoretical ACF is ρ₁ = 0.7 / (1 + 0.7²) ≈ 0.4698 and ρ_h = 0 for h ≥ 2; a plot of this ACF follows. The plot just shown is the theoretical ACF for an MA(1) with θ₁ = 0.7. In practice, a sample usually does not provide such a clear pattern. Using R, we simulated n = 100 sample values using the model x_t = 10 + w_t + 0.7 w_{t-1}, where w_t ~ iid N(0, 1). For this simulation, a time series plot of the sample data follows. We cannot tell much from this plot.
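As a quick numerical check of the example above, the following minimal R sketch simulates the MA(1) series and compares the sample lag-1 autocorrelation with the theoretical value θ₁/(1 + θ₁²) ≈ 0.47; arima.sim and acf are base R functions, and the seed is an arbitrary choice, not part of the original lesson.

    set.seed(42)                                   # arbitrary seed for reproducibility
    x <- arima.sim(n = 100, list(ma = 0.7)) + 10   # simulate MA(1) with theta1 = 0.7, mean 10
    acf(x, plot = FALSE)$acf[2]                    # sample lag-1 autocorrelation
    0.7 / (1 + 0.7^2)                              # theoretical rho_1, about 0.4698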
The sample ACF for the simulated data follows. We note that the sample ACF does not match the theoretical pattern of the underlying MA(1) exactly, namely that all autocorrelations for lags greater than 1 should be 0. A different sample would have a slightly different sample ACF, shown below, but would likely have the same broad features.

Theoretical properties of a time series with an MA(2) model. For the MA(2) model, the theoretical properties are as follows: the mean is μ, the variance is σ²_w(1 + θ₁² + θ₂²), and the ACF is ρ₁ = (θ₁ + θ₁θ₂)/(1 + θ₁² + θ₂²), ρ₂ = θ₂/(1 + θ₁² + θ₂²), and ρ_h = 0 for h ≥ 3. Note that the only nonzero values in the theoretical ACF are at lags 1 and 2; autocorrelations for higher lags are 0. Thus a sample ACF with significant autocorrelations at lags 1 and 2, but non-significant autocorrelations at higher lags, indicates a possible MA(2) model.

Example 2. Consider the MA(2) model x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}, where w_t ~ iid N(0, 1). The coefficients are θ₁ = 0.5 and θ₂ = 0.3. Because this is an MA(2), the theoretical ACF will have nonzero values only at lags 1 and 2. The values of the two nonzero autocorrelations are ρ₁ = (0.5 + 0.5 × 0.3)/(1 + 0.5² + 0.3²) = 0.65/1.34 ≈ 0.4851 and ρ₂ = 0.3/1.34 ≈ 0.2239. A plot of the theoretical ACF follows. As is almost always the case, sample data will not behave quite as perfectly as theory. We simulated n = 150 sample values for the model x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}, where w_t ~ iid N(0, 1). The time series plot of the data follows; as with the time series plot for the MA(1) sample data, you cannot tell much from it. The sample ACF for the simulated data follows. The pattern is typical of situations in which an MA(2) model may be useful: there are two statistically significant spikes at lags 1 and 2, followed by non-significant values at the other lags. Note that, because of sampling error, the sample ACF does not match the theoretical pattern exactly.

ACF for general MA(q) models. A property of MA(q) models in general is that there are nonzero autocorrelations for the first q lags and autocorrelations equal to 0 for all lags > q.

Non-uniqueness of the connection between θ₁ and ρ₁ in the MA(1) model. In the MA(1) model, for any value of θ₁, the reciprocal 1/θ₁ gives the same value of ρ₁. As an example, use 0.5 for θ₁, and then use 1/0.5 = 2 for θ₁; you will obtain ρ₁ = 0.4 in both instances. To satisfy a theoretical restriction called invertibility, we restrict MA(1) models to values of θ₁ with absolute value less than 1. In the example just given, θ₁ = 0.5 is an allowable parameter value, whereas θ₁ = 1/0.5 = 2 is not.

Invertibility of MA models. An MA model is said to be invertible if it is algebraically equivalent to a convergent infinite-order AR model. By convergent, we mean that the AR coefficients decrease to 0 as we move back in time. Invertibility is a restriction programmed into the time series software used to estimate the coefficients of models with MA terms; it is not something that we check for in the data analysis. Additional information about the invertibility restriction for MA(1) models is given in the appendix.

Advanced theory note. For an MA(q) model with a specified ACF, there is only one invertible model. A necessary condition for invertibility is that the θ coefficients have values such that the equation 1 − θ₁y − … − θ_q y^q = 0 has solutions for y that fall outside the unit circle.
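A minimal R sketch of the non-uniqueness and invertibility points above, using the θ values 0.5 and 2 from the text; ARMAacf and polyroot are base R functions.

    # Same lag-1 autocorrelation for theta1 = 0.5 and its reciprocal 2
    ARMAacf(ma = 0.5, lag.max = 1)   # rho_1 = 0.4
    ARMAacf(ma = 2,   lag.max = 1)   # rho_1 = 0.4 as well

    # Invertibility check: the root of the MA polynomial 1 + theta1*y
    # (positive-sign convention, as used by R) must lie outside the unit circle
    Mod(polyroot(c(1, 0.5)))         # 2.0 -> invertible
    Mod(polyroot(c(1, 2)))           # 0.5 -> not invertible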
R code for the examples. In Example 1, we plotted the theoretical ACF of the model x_t = 10 + w_t + 0.7 w_{t-1}, and then simulated n = 150 values from this model and plotted the sample time series and the sample ACF for the simulated data. The R commands used to plot the theoretical ACF were:

    acfma1 = ARMAacf(ma = c(0.7), lag.max = 10)  # 10 lags of the ACF for MA(1) with theta1 = 0.7
    lags = 0:10                                  # creates a variable named lags that ranges from 0 to 10
    plot(lags, acfma1, xlim = c(1, 10), ylab = "r", type = "h",
         main = "ACF for MA(1) with theta1 = 0.7")
    abline(h = 0)                                # adds a horizontal axis to the plot

The first command computes the ACF and stores it in an object named acfma1 (our choice of name). The plot command (the third command) plots the lags against the ACF values for lags 1 through 10. The ylab parameter labels the y-axis and the main parameter puts a title on the plot. To see the numerical values of the ACF, simply use the command acfma1. The simulation and plots were done with the following commands:

    xc = arima.sim(n = 150, list(ma = c(0.7)))   # simulates n = 150 values from an MA(1)
    x = xc + 10                                  # adds 10 to make the mean 10; the simulation defaults to mean 0
    plot(x, type = "b", main = "Simulated MA(1) data")
    acf(x, xlim = c(1, 10), main = "ACF for simulated sample data")

In Example 2, we plotted the theoretical ACF of the model x_t = 10 + w_t + 0.5 w_{t-1} + 0.3 w_{t-2}, and then simulated n = 150 values from this model and plotted the sample time series and the sample ACF for the simulated data. The R commands used were:

    acfma2 = ARMAacf(ma = c(0.5, 0.3), lag.max = 10)
    acfma2
    lags = 0:10
    plot(lags, acfma2, xlim = c(1, 10), ylab = "r", type = "h",
         main = "ACF for MA(2) with theta1 = 0.5, theta2 = 0.3")
    abline(h = 0)
    xc = arima.sim(n = 150, list(ma = c(0.5, 0.3)))
    x = xc + 10
    plot(x, type = "b", main = "Simulated MA(2) Series")
    acf(x, xlim = c(1, 10), main = "ACF for simulated MA(2) Data")

Appendix: proof of the MA(1) properties. For interested students, here are proofs of the theoretical properties of the MA(1) model x_t = μ + w_t + θ₁ w_{t-1}. The mean is E(x_t) = μ and the variance is Var(x_t) = σ²_w(1 + θ₁²). For the covariance at lag h,

    Cov(x_t, x_{t-h}) = E[(w_t + θ₁ w_{t-1})(w_{t-h} + θ₁ w_{t-h-1})]
                      = E[w_t w_{t-h} + θ₁ w_{t-1} w_{t-h} + θ₁ w_t w_{t-h-1} + θ₁² w_{t-1} w_{t-h-1}].

When h = 1, the preceding expression equals θ₁ σ²_w. For any h ≥ 2, the preceding expression equals 0. The reason is that, by the definition of independence of the w_t, E(w_k w_j) = 0 for any k ≠ j; furthermore, because the w_t have mean 0, E(w_j w_j) = E(w_j²) = σ²_w. For a time series, ρ_h = Cov(x_t, x_{t-h}) / Var(x_t); apply this result to obtain the ACF given above.

An invertible MA model is one that can be written as an infinite-order AR model that converges, so that the AR coefficients converge to 0 as we move infinitely back in time. We'll demonstrate invertibility for the MA(1) model. Writing z_t = x_t − μ, the MA(1) model is

    (1)  z_t = w_t + θ₁ w_{t-1}.

At time t − 1, the model gives

    (2)  w_{t-1} = z_{t-1} − θ₁ w_{t-2}.

Next, substitute relationship (2) for w_{t-1} in equation (1):

    (3)  z_t = w_t + θ₁(z_{t-1} − θ₁ w_{t-2}) = w_t + θ₁ z_{t-1} − θ₁² w_{t-2}.

At time t − 2, equation (2) becomes

    (4)  w_{t-2} = z_{t-2} − θ₁ w_{t-3}.

We then substitute relationship (4) for w_{t-2} in equation (3):

    z_t = w_t + θ₁ z_{t-1} − θ₁²(z_{t-2} − θ₁ w_{t-3}) = w_t + θ₁ z_{t-1} − θ₁² z_{t-2} + θ₁³ w_{t-3}.

If we were to continue (infinitely), we would obtain the infinite-order AR model

    z_t = w_t + θ₁ z_{t-1} − θ₁² z_{t-2} + θ₁³ z_{t-3} − θ₁⁴ z_{t-4} + ….

Note, however, that if |θ₁| ≥ 1, the coefficients multiplying the lags of z increase (infinitely) in size as we move back in time. To prevent this, we need |θ₁| < 1. This is the condition for an invertible MA(1) model.

Infinite-order MA model. In Week 3, we'll see that an AR(1) model can be converted to an infinite-order MA model:

    x_t − μ = w_t + φ₁ w_{t-1} + φ₁² w_{t-2} + … + φ₁^k w_{t-k} + … = Σ_{j=0}^{∞} φ₁^j w_{t-j}.

This summation of past white noise terms is known as the causal representation of an AR(1). In other words, x_t is a special type of MA with an infinite number of terms going back in time; this is called an infinite-order MA, or MA(∞). A finite-order MA is an infinite-order AR, and any finite-order AR is an infinite-order MA. Recall from Week 1 that a requirement for a stationary AR(1) is that |φ₁| < 1. Let's calculate Var(x_t) using the causal representation:

    Var(x_t) = Var(Σ_{j=0}^{∞} φ₁^j w_{t-j}) = σ²_w Σ_{j=0}^{∞} φ₁^{2j} = σ²_w / (1 − φ₁²).
This last step uses a basic fact about geometric series that requires |φ₁| < 1; otherwise the series diverges.

Structural equation modeling is a very general, very powerful multivariate analysis technique that includes specialized versions of a number of other analysis methods as special cases. We will assume that you are familiar with the basic logic of statistical reasoning as described in Elementary Concepts. We will also assume that you are familiar with the concepts of variance, covariance, and correlation; if not, we advise that you read the Basic Statistics section at this point. Although it is not absolutely necessary, it is highly desirable that you have some background in factor analysis before attempting to use structural modeling.

Major applications of structural equation modeling include: causal modeling, or path analysis, which hypothesizes causal relationships among variables and tests the causal models with a system of linear equations (causal models can involve manifest variables, latent variables, or both); confirmatory factor analysis, an extension of factor analysis in which specific hypotheses about the structure of the factor loadings and intercorrelations are tested; second-order factor analysis, a variation of factor analysis in which the correlation matrix of the common factors is itself factor analyzed to provide second-order factors; regression models, an extension of linear regression analysis in which regression weights may be constrained to be equal to each other, or to specified numerical values; covariance structure models, which hypothesize that a covariance matrix has a particular form (for example, you can test the hypothesis that a set of variables all have equal variances with this procedure); and correlation structure models, which hypothesize that a correlation matrix has a particular form (a classic example is the hypothesis that the correlation matrix has the structure of a circumplex; Guttman, 1954; Wiggins, Steiger, & Gaelick, 1981). Many different kinds of models fall into each of the above categories, so structural modeling as an enterprise is very difficult to characterize. Most structural equation models can be expressed as path diagrams; consequently, even beginners in structural modeling can perform complicated analyses with a minimum of training.

The basic idea behind structural modeling. One of the fundamental ideas taught in intermediate applied statistics courses is the effect of additive and multiplicative transformations on a list of numbers. Students are taught that if you multiply every number in a list by some constant K, you multiply the mean of the numbers by K; similarly, you multiply the standard deviation by the absolute value of K. For example, suppose you have the list of numbers 1, 2, 3. These numbers have a mean of 2 and a standard deviation of 1. Now suppose you were to take these 3 numbers and multiply them by 4. The mean would then become 8, the standard deviation would become 4, and the variance therefore 16. The point is, if you have a set of numbers X related to another set of numbers Y by the equation Y = 4X, then the variance of Y must be 16 times that of X, so you can test the hypothesis that Y and X are related by the equation Y = 4X indirectly by comparing the variances of the Y and X variables.
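A minimal R sketch of this scaling fact, using the numbers 1, 2, 3 and the constant 4 from the text:

    x <- c(1, 2, 3)
    y <- 4 * x
    c(mean(x), sd(x), var(x))   # 2, 1, 1
    c(mean(y), sd(y), var(y))   # 8, 4, 16: the variance is multiplied by 4^2 = 16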
This idea generalizes, in various ways, to several variables interrelated by a group of linear equations. The rules become more complex and the calculations more difficult, but the basic message remains the same: you can test whether variables are interrelated through a set of linear relationships by examining the variances and covariances of the variables. Statisticians have developed procedures for testing whether a set of variances and covariances in a covariance matrix fits a specified structure. The way structural modeling works is as follows: (1) you state the way you believe the variables are interrelated, often with the use of a path diagram; (2) you work out, via some complex internal rules, what the implications of this are for the variances and covariances of the variables; (3) you test whether the variances and covariances fit this model of them; (4) the results of the statistical testing, together with the parameter estimates and standard errors for the numerical coefficients in the linear equations, are reported; and (5) on the basis of this information, you decide whether the model appears to be a good fit to your data.

There are some important and very basic logical points to remember about this process. First, although the mathematical machinery required to perform structural equation modeling is extremely complicated, the basic logic is embodied in the 5 steps above. Below, we diagram the process. Second, we must remember that it is unreasonable to expect a structural model to fit perfectly, for a number of reasons. A structural model with linear relationships is only an approximation; the world is unlikely to be linear, and indeed the true relationships between variables are probably nonlinear. Moreover, many of the statistical assumptions are somewhat questionable as well. The real question is not so much "Does the model fit perfectly?" but rather "Does it fit well enough to be a useful approximation to reality, and a reasonable explanation of the trends in our data?" Third, we must remember that simply because a model fits the data well does not mean that the model is necessarily correct. One cannot prove that a model is true by asserting this; that is the fallacy of affirming the consequent. For example, we could say "If Joe is a cat, Joe has hair." However, "Joe has hair" does not imply that Joe is a cat. Similarly, we can say that if a certain causal model is true, it will fit the data; however, the model fitting the data does not necessarily imply that the model is the correct one. There may be another model that fits the data equally well.

Structural equation modeling and path diagrams. Path diagrams play a fundamental role in structural modeling. Path diagrams are like flowcharts: they show variables interconnected with lines that are used to indicate causal flow. One can think of a path diagram as a device for showing which variables cause changes in other variables. However, path diagrams need not be thought of strictly in this way; they may also be given a narrower, more specific interpretation. Consider the classic linear regression equation. Any such equation can be represented in a path diagram as follows. Such diagrams establish a simple isomorphism.
All variables in the equation system are placed in the diagram, in boxes or ovals. Each equation is represented in the diagram as follows: all independent variables (the variables on the right-hand side of an equation) have arrows pointing to the dependent variable, and the weighting coefficient is placed above the arrow. The diagram above shows a simple system of linear equations and its path diagram representation.

Notice that, besides representing the linear equation relationships with arrows, the diagrams also contain some additional features. First, the variances of the independent variables, which we must know in order to test the structural relations model, are shown in the diagrams using curved lines without arrowheads; we refer to such lines as wires. Second, some variables are represented in ovals, others in rectangular boxes. Manifest variables are placed in boxes in the path diagram; latent variables are placed in an oval or circle. For example, the variable E in the diagram above can be thought of as a linear regression residual when Y is predicted from X. Such a residual is not observed directly but is calculated from Y and X, so we treat it as a latent variable and place it in an oval.

The example discussed above is extremely simple; generally, we are interested in testing models that are much more complicated than this. As the equation systems we examine become increasingly complicated, so do the covariance structures they imply. Ultimately, the complexity can become so bewildering that we lose sight of some very basic principles. For one thing, the chain of reasoning that supports testing causal models with linear structural equation tests has several weak links. Variables may be nonlinear; they may be linearly related for reasons unrelated to what we commonly view as causality. The old adage "correlation is not causation" remains true, even if the correlation is complex and multivariate. What causal modeling does allow us to do is examine the extent to which the data fail to agree with one reasonably viable consequence of a model of causality. If the system of linear equations isomorphic to the path diagram fits the data well, that is encouraging, but hardly proof of the truth of the causal model.

Although path diagrams can be used to represent causal flow in a system of variables, they need not imply such a causal flow. Such diagrams may be viewed as simply an isomorphic representation of a system of linear equations; as such, they can convey linear relationships even when no causal relations are assumed. Thus, although one might interpret the diagram in the figure above as meaning that X causes Y, the diagram can also be interpreted as a visual representation of the linear regression relationship between X and Y.
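As a concrete, minimal illustration of the declare-then-test workflow described above, here is a sketch using the R package lavaan; the package choice, variable names, and simulated data are illustrative assumptions, not part of the original text.

    # install.packages("lavaan")   # assumed to be available; not mentioned in the text
    library(lavaan)
    set.seed(1)
    dat <- data.frame(x = rnorm(200))
    dat$y <- 4 * dat$x + rnorm(200)     # data generated to be consistent with Y = 4X + E
    model <- ' y ~ x '                  # path diagram: X -> Y, with residual E on Y
    fit <- sem(model, data = dat)
    summary(fit, fit.measures = TRUE)   # parameter estimates, standard errors, and fit statistics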
Survival/Failure Time Analysis — general information. These techniques were developed primarily in the medical and biological sciences, but they are also widely used in the social and economic sciences, as well as in engineering (reliability and failure time analysis). Imagine that you are a researcher in a hospital who is studying the effectiveness of a new treatment for a generally terminal disease. The main variable of interest is the number of days that the respective patients survive. In principle, one could use the standard parametric and nonparametric statistics to describe the average survival and to compare the new treatment with traditional methods (see Basic Statistics and Nonparametrics and Distribution Fitting). However, at the end of the study there will be patients who survived over the entire study period, in particular among those patients who entered the hospital (and the research project) late in the study; there will be other patients with whom we will have lost contact. Surely one would not want to exclude all of those patients from the study by declaring them to be missing data (since most of them are survivors and therefore reflect on the success of the new treatment method). Those observations, which contain only partial information, are called censored observations (e.g., "patient A survived at least 4 months before he moved away and we lost contact"; the term censoring was first used by Hald, 1949).

Censored observations. In general, censored observations arise whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time. Censored observations may occur in a number of different areas of research. For example, in the social sciences we may study the survival of marriages, high school drop-out rates (time to drop-out), turnover in organizations, and so on. In each case, by the end of the study period, some subjects will probably still be married, will not have dropped out, or will still be working at the same company; thus, those subjects represent censored observations. In economics we may study the survival of new businesses or the survival times of products such as automobiles. In quality control research, it is common practice to study the survival of parts under stress (failure time analysis).

Analytic techniques. Essentially, the methods offered in Survival Analysis address the same research questions as many of the other procedures; however, all methods in Survival Analysis will handle censored data. The life table, survival distribution fitting, and Kaplan-Meier estimation of the survival function are all descriptive methods for estimating the distribution of survival times from a sample. Several techniques are available for comparing survival in two or more groups. Finally, Survival Analysis offers several regression models for estimating the relationship of (multiple) continuous variables to survival times.

Life table analysis. The most straightforward way to describe the survival in a sample is to compute the life table. The life table technique is one of the oldest methods for analyzing survival (failure time) data (e.g., see Berkson & Gage, 1950; Cutler & Ederer, 1958; Gehan, 1969). This table can be thought of as an enhanced frequency distribution table. The distribution of survival times is divided into a certain number of intervals. For each interval we can then compute the number and proportion of cases or objects that entered the respective interval alive, the number and proportion of cases that failed in the respective interval (i.e., the number of terminal events, or number of cases that died), and the number of cases that were lost or censored in the respective interval.
Based on those numbers and proportions, several additional statistics can be computed:

Number of cases at risk. This is the number of cases that entered the respective interval alive, minus half of the number of cases lost or censored in the respective interval.

Proportion failing. This proportion is computed as the ratio of the number of cases failing in the respective interval, divided by the number of cases at risk in the interval.

Proportion surviving. This proportion is computed as 1 minus the proportion failing.

Cumulative proportion surviving (survival function). This is the cumulative proportion of cases surviving up to the respective interval. Since the probabilities of survival are assumed to be independent across the intervals, this probability is computed by multiplying out the probabilities of survival across all previous intervals. The resulting function is also called the survivorship or survival function.

Probability density. This is the estimated probability of failure in the respective interval, computed per unit of time, that is, F_i = (P_i − P_{i+1}) / h_i. In this formula, F_i is the respective probability density in the i-th interval, P_i is the estimated cumulative proportion surviving at the beginning of the i-th interval (at the end of interval i − 1), P_{i+1} is the cumulative proportion surviving at the end of the i-th interval, and h_i is the width of the respective interval.

Hazard rate. The hazard rate (the term was first used by Barlow, 1963) is defined as the probability per unit of time that a case that has survived to the beginning of the respective interval will fail in that interval. Specifically, it is computed as the number of failures per time unit in the respective interval, divided by the average number of surviving cases at the mid-point of the interval.

Median survival time. This is the survival time at which the cumulative survival function equals 0.5. Other percentiles (the 25th and 75th percentiles) of the cumulative survival function can be computed accordingly. Note that the 50th percentile (median) of the cumulative survival function is usually not the same as the point in time up to which 50% of the sample survived; this would only be the case if there were no censored observations prior to that time.

Required sample sizes. In order to arrive at reliable estimates of the three major functions (survival, probability density, and hazard) and their standard errors at each time interval, the minimum recommended sample size is 30.

Distribution fitting — general introduction. In summary, the life table gives us a good indication of the distribution of failures over time. However, for predictive purposes it is often desirable to understand the shape of the underlying survival function in the population. The major distributions that have been proposed for modeling survival or failure times are the exponential (and linear exponential) distribution, the Weibull distribution of extreme events, and the Gompertz distribution.

Estimation. The parameter estimation procedure (for estimating the parameters of the theoretical survival functions) is essentially a least squares linear regression algorithm (see Gehan & Siddiqui, 1973). A linear regression algorithm can be used because all four theoretical distributions can be made linear by appropriate transformations.
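A minimal R sketch of this linearization idea for the Weibull case; the simulated, uncensored failure times, the chosen shape/scale values, and the crude empirical survival estimate are illustrative assumptions, not the weighted procedure an actual package would use.

    set.seed(1)
    ft  <- sort(rweibull(200, shape = 1.5, scale = 10))   # hypothetical uncensored failure times
    S   <- 1 - (seq_along(ft) - 0.5) / length(ft)         # crude empirical survival estimate
    fit <- lm(log(-log(S)) ~ log(ft))                     # Weibull: log(-log S(t)) = k*log(t) - k*log(b)
    k <- unname(coef(fit)[2])                             # estimated shape
    b <- exp(-unname(coef(fit)[1]) / k)                   # estimated scale
    c(shape = k, scale = b)                               # should be roughly 1.5 and 10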
Such transformations sometimes produce different variances for the residuals at different times, leading to biased estimates.

Goodness of fit. Given the parameters for the different distribution functions and the respective model, we can compute the likelihood of the data. One can also compute the likelihood of the data under the null model, that is, a model that allows for different hazard rates in each interval. Without going into details, these two likelihoods can be compared via an incremental Chi-square test statistic. If this Chi-square is statistically significant, then we conclude that the respective theoretical distribution fits the data significantly worse than the null model; that is, we reject the respective distribution as a model for our data.

Plots. You can produce plots of the survival function, hazard, and probability density for the observed data and the respective theoretical distributions. These plots provide a quick visual check of the goodness of fit of the theoretical distribution. The plot below shows an observed survival function and the fitted Weibull distribution. Specifically, the three lines in this plot denote the theoretical distributions that resulted from three different estimation procedures (least squares and two weighted least squares methods).

Kaplan-Meier product-limit estimator. Rather than classifying the observed survival times into a life table, we can estimate the survival function directly from the continuous survival or failure times. Intuitively, imagine that we create a life table so that each time interval contains exactly one case; the survival function can then be written as

    S(t) = Π_{j=1..t} [ (n − j) / (n − j + 1) ]^{δ(j)}.

In this equation, S(t) is the estimated survival function, n is the total number of cases, and Π denotes multiplication (the geometric sum) across all cases less than or equal to t; δ(j) is a constant that is either 1 if the j-th case is uncensored (complete) or 0 if it is censored. This estimate of the survival function is also called the product-limit estimator, and was first proposed by Kaplan and Meier (1958). An example plot of this function is shown below. The advantage of the Kaplan-Meier product-limit method over the life table method for analyzing survival and failure time data is that the resulting estimates do not depend on the grouping of the data (into a certain number of time intervals). In fact, the product-limit method and the life table method are identical if the intervals of the life table contain at most one observation.
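A minimal sketch of the product-limit estimate in R using the survival package; the package choice, variable names, and toy data are illustrative assumptions.

    library(survival)
    time   <- c(2, 5, 7, 7, 9, 12, 15, 16, 20, 22)   # hypothetical survival times
    status <- c(1, 1, 0, 1, 1, 0, 1, 1, 0, 1)        # 1 = event (uncensored), 0 = censored
    km <- survfit(Surv(time, status) ~ 1)            # Kaplan-Meier product-limit estimate
    summary(km)                                      # table of S(t) with standard errors
    plot(km, xlab = "Time", ylab = "Cumulative proportion surviving")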
Comparing samples — general introduction. One can compare the survival or failure times in two or more samples. In principle, because survival times are not normally distributed, nonparametric tests based on the rank ordering of survival times should be applied. A wide range of nonparametric tests could be used to compare survival times; however, the standard tests cannot handle censored observations.

Available tests. The following five different (mostly nonparametric) tests for censored data are available: Gehan's generalized Wilcoxon test, the Cox-Mantel test, Cox's F test, the log-rank test, and Peto and Peto's generalized Wilcoxon test. A nonparametric test for the comparison of multiple groups is also available. Most of these tests are accompanied by appropriate z values (values of the standard normal distribution); these z values can be used to test for the statistical significance of any differences between groups. However, note that most of these tests will only yield reliable results with fairly large sample sizes; the small-sample behavior is less well understood.

Choosing a two-sample test. There are no widely accepted guidelines concerning which test to use in a particular situation. Cox's F test tends to be more powerful than Gehan's generalized Wilcoxon test when sample sizes are small (i.e., n per group less than 50), the samples are from an exponential or Weibull distribution, and there are no censored observations (see Gehan & Thomas, 1969). Lee, Desu, and Gehan (1975) compared Gehan's test to several alternatives and showed that the Cox-Mantel test and the log-rank test are more powerful (regardless of censoring) when the samples are drawn from a population that follows an exponential or Weibull distribution; under these conditions there is little difference between the Cox-Mantel test and the log-rank test. Lee (1980) discusses the power of different tests in greater detail.

Multiple-sample test. There is a multiple-sample test that is an extension (or generalization) of Gehan's generalized Wilcoxon test, Peto and Peto's generalized Wilcoxon test, and the log-rank test. First, a score is assigned to each survival time using Mantel's procedure (Mantel, 1967); next, a Chi-square value is computed based on the sums (for each group) of these scores. If only two groups are specified, then this test is equivalent to Gehan's generalized Wilcoxon test, and the computations will default to that test in this case.

Unequal proportions of censored data. When comparing two or more groups, it is very important to examine the number of censored observations in each group. Particularly in medical research, censoring can be the result of, for example, the application of different treatments: patients who get better faster, or who get worse as a result of a treatment, may be more likely to drop out of the study, resulting in different numbers of censored observations in each group. Such systematic censoring may greatly bias the results of comparisons.
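A minimal sketch of a two-group comparison in R with the survival package; survdiff performs the log-rank test by default, and the data and group labels here are hypothetical.

    library(survival)
    time   <- c(2, 5, 7, 7, 9, 12, 15, 16, 20, 22)
    status <- c(1, 1, 0, 1, 1, 0, 1, 1, 0, 1)
    group  <- rep(c("A", "B"), each = 5)
    survdiff(Surv(time, status) ~ group)            # log-rank (Mantel-Haenszel) test
    survdiff(Surv(time, status) ~ group, rho = 1)   # rho = 1: Peto & Peto modification of the Gehan-Wilcoxon test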
Regression models — general introduction. A common research question in medical, biological, or engineering (failure time) research is whether certain continuous (independent) variables are correlated with the survival or failure times. There are two major reasons why this question cannot be addressed via straightforward multiple regression techniques (as available in Multiple Regression). First, the dependent variable of interest (survival time) is most likely not normally distributed, a serious violation of an assumption of ordinary least squares multiple regression; survival times usually follow an exponential or Weibull distribution. Second, there is the problem of censoring: some observations will be incomplete.

Cox's proportional hazards model. The proportional hazards model is the most general of the regression models because it is not based on any assumptions concerning the nature or shape of the underlying survival distribution. The model assumes that the underlying hazard rate (rather than the survival time) is a function of the independent variables (covariates); no assumptions are made about the nature or shape of the hazard function. Thus, in a sense, Cox's regression model may be considered a nonparametric method. The model may be written as

    h{(t), (z_1, z_2, …, z_m)} = h_0(t) · exp(b_1 z_1 + … + b_m z_m),

where h(t, …) denotes the resulting hazard, given the values of the m covariates for the respective case (z_1, z_2, …, z_m) and the respective survival time (t). The term h_0(t) is called the baseline hazard; it is the hazard for the respective individual when all independent variable values are equal to zero. We can linearize this model by dividing both sides of the equation by h_0(t) and then taking the natural logarithm of both sides:

    log[ h{(t), (z…)} / h_0(t) ] = b_1 z_1 + … + b_m z_m.

We now have a fairly simple linear model that can be readily estimated.

Assumptions. While no assumptions are made about the shape of the underlying hazard function, the model equations shown above do imply two assumptions. First, they specify a multiplicative relationship between the underlying hazard function and the log-linear function of the covariates; this assumption is also called the proportionality assumption. In practical terms, it is assumed that, given two observations with different values for the independent variables, the ratio of the hazard functions for those two observations does not depend on time. The second assumption is that there is a log-linear relationship between the independent variables and the underlying hazard function.
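A minimal sketch of fitting this model in R with the survival package; the data frame and covariate names z1 and z2 are hypothetical.

    library(survival)
    dat <- data.frame(
      time   = c(2, 5, 7, 7, 9, 12, 15, 16, 20, 22),
      status = c(1, 1, 0, 1, 1, 0, 1, 1, 0, 1),
      z1     = c(50, 61, 45, 70, 58, 66, 52, 47, 73, 60),   # e.g. age
      z2     = c(0, 1, 0, 1, 1, 0, 0, 1, 1, 0)              # e.g. treatment indicator
    )
    fit <- coxph(Surv(time, status) ~ z1 + z2, data = dat)
    summary(fit)   # coefficients b_i, hazard ratios exp(b_i), and significance tests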
Cox's proportional hazards model with time-dependent covariates. An assumption of the proportional hazards model is that the hazard function for an individual (i.e., an observation in the analysis) depends on the values of the covariates and on the value of the baseline hazard. Given two individuals with particular values for the covariates, the ratio of the estimated hazards over time will be constant, hence the name of the method: the proportional hazards model. The validity of this assumption may often be questionable. For example, age is frequently included in studies of physical health. Suppose you studied survival after surgery; it is likely that age is a more important predictor of risk immediately after surgery than some time after the surgery (after the initial recovery). In accelerated life testing one sometimes uses a stress covariate (e.g., the amount of voltage) that is slowly increased over time until failure occurs (e.g., until the electrical insulation fails; see Lawless, 1982, page 393). In this case, the impact of the covariate is clearly time dependent. The user can specify arithmetic expressions to define covariates as functions of several variables and of survival time.

Testing the proportionality assumption. As indicated by the previous examples, there are many applications in which it is likely that the proportionality assumption does not hold. In that case, one can explicitly define covariates as functions of time. For example, consider the analysis of a data set presented by Pike (1966), consisting of survival times for two groups of rats that had been exposed to a carcinogen (see also Lawless, 1982, page 393, for a similar example). Suppose that z is a grouping variable with codes 1 and 0 to denote whether or not the respective rat was exposed. One could then fit the proportional hazards model

    h(t, z) = h_0(t) · exp{ b_1 z + b_2 [z · (log(t) − 5.4)] }.

Thus, in this model the conditional hazard at time t is a function of (1) the baseline hazard h_0, (2) the covariate z, and (3) z times the logarithm of time. Note that the constant 5.4 is used here only for scaling purposes: the mean of the logarithm of the survival times in this data set is equal to 5.4. In other words, the conditional hazard at each point in time is a function of the covariate and of time; thus, the effect of the covariate on survival is dependent on time, hence the name time-dependent covariate. This model allows a specific test of the proportionality assumption: if the parameter b_2 is statistically significant (e.g., if it is at least twice as large as its standard error), then one can conclude that, indeed, the effect of the covariate z on survival is dependent on time, and therefore that the proportionality assumption does not hold.

Exponential regression. Basically, this model assumes that the distribution of survival times is exponential and depends on the values of a set of independent variables (z_i). The rate parameter of the exponential distribution can then be expressed in terms of a constant a and regression parameters b_i applied to the covariates; S(z) denotes the survival times.

Goodness of fit. The Chi-square goodness-of-fit value is computed as a function of the log-likelihood for the model with all parameter estimates (L1) and the log-likelihood of the model in which all covariates are forced to 0 (L0). If this Chi-square value is significant, we reject the null hypothesis and conclude that the independent variables are significantly related to the survival times.

Standard exponential order statistic. One way to check the exponentiality assumption of this model is to plot the residual survival times against the standard exponential order statistic theta. If the exponentiality assumption is met, then all points in this plot will lie roughly on a straight line.

Normal and log-normal regression. In this model it is assumed that the survival times (or log survival times) come from a normal distribution; the resulting model is basically identical to the ordinary multiple regression model, with t denoting the survival times (for log-normal regression, t is replaced by its natural logarithm). The normal regression model is particularly useful because many data sets can be transformed to yield approximations of the normal distribution. Thus, in a sense this is the most general fully parametric model (as opposed to Cox's proportional hazards model, which is nonparametric), and estimates can be obtained for a variety of different underlying survival distributions.

Goodness of fit. The Chi-square value is computed as a function of the log-likelihood for the model with all independent variables (L1) and the log-likelihood of the model in which all independent variables are forced to 0 (L0).
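A minimal sketch of parametric (exponential and log-normal) survival regression in R using the survival package's survreg function, together with the L1-versus-L0 Chi-square comparison described above; the data and covariate name are hypothetical.

    library(survival)
    dat <- data.frame(
      time   = c(2, 5, 7, 7, 9, 12, 15, 16, 20, 22),
      status = c(1, 1, 0, 1, 1, 0, 1, 1, 0, 1),
      z1     = c(50, 61, 45, 70, 58, 66, 52, 47, 73, 60)
    )
    fit_exp <- survreg(Surv(time, status) ~ z1, data = dat, dist = "exponential")
    fit_ln  <- survreg(Surv(time, status) ~ z1, data = dat, dist = "lognormal")
    summary(fit_exp)
    # Chi-square test comparing L1 (all covariates) with L0 (covariates forced to 0):
    fit_0 <- survreg(Surv(time, status) ~ 1, data = dat, dist = "exponential")
    chisq <- as.numeric(2 * (logLik(fit_exp) - logLik(fit_0)))
    pchisq(chisq, df = 1, lower.tail = FALSE)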
Stratified analyses. The purpose of a stratified analysis is to test the hypothesis that identical regression models are appropriate for different groups, that is, that the relationships between the independent variables and survival are identical across groups. To perform a stratified analysis, one must first fit the respective regression model separately within each group. The sum of the log-likelihoods from these analyses represents the log-likelihood of the model with different regression coefficients (and intercepts, where appropriate) in different groups. The next step is to fit the requested regression model to all data in the usual manner (i.e., ignoring group membership) and to compute the log-likelihood for the overall fit. The difference between the log-likelihoods can then be tested for statistical significance (via the Chi-square statistic).

Text Mining (Big Data, Unstructured Data) — introduction and overview. The purpose of text mining is to process unstructured (textual) information, extract meaningful numeric indices from the text, and thus make the information contained in the text accessible to the various data mining (statistical and machine learning) algorithms. Information can be extracted to derive summaries for the words contained in the documents, or to compute summaries for the documents based on the words they contain. Hence, you can analyze words, clusters of words used in documents, etc., or you could analyze documents and determine similarities between them or how they are related to other variables of interest in the data mining project. In the most general terms, text mining will turn text into numbers (meaningful indices), which can then be incorporated into other analyses such as predictive data mining projects, the application of unsupervised learning methods (clustering), etc. These methods are described and discussed in great detail in the comprehensive overview work by Manning and Schütze (2002), and for an in-depth treatment of these and related topics, as well as the history of this approach to text mining, we highly recommend that source.

Typical applications for text mining. Unstructured text is very common, and in fact may represent the majority of the information available to a particular research or data mining project.

Analyzing open-ended survey responses. In survey research (e.g., marketing), it is not uncommon to include various open-ended questions pertaining to the topic under investigation. The idea is to permit respondents to express their views or opinions without constraining them to particular dimensions or a particular response format. This may yield insights into customers' views and opinions that might otherwise not be discovered when relying solely on structured questionnaires designed by experts. For example, you may discover a certain set of words or terms that are commonly used by respondents to describe the pros and cons of a product or service (under investigation), suggesting common misconceptions or confusion regarding the items in the study.

Automatic processing of messages, emails, etc. Another common application for text mining is to aid in the automatic classification of texts. For example, it is possible to automatically filter out most undesirable junk email based on certain terms or words that are not likely to appear in legitimate messages but instead identify undesirable electronic mail. In this manner, such messages can automatically be discarded. Such automatic systems for classifying electronic messages can also be useful in applications where messages need to be routed (automatically) to the most appropriate department or agency; e.g.,
email messages with complaints or petitions to a municipal authority are automatically routed to the appropriate departments; at the same time, the emails are screened for inappropriate or obscene messages, which are automatically returned to the sender with a request to remove the offending words or content.

Analyzing warranty or insurance claims, diagnostic interviews, etc. In some business domains, the majority of information is collected in open-ended, textual form. For example, warranty claims or initial medical (patient) interviews can be summarized in brief narratives, or when you take your automobile to a service station for repairs, the attendant will typically write some notes about the problems that you report and what you believe needs to be fixed. Increasingly, those notes are collected electronically, so those types of narratives are readily available for input into text mining algorithms. This information can then be usefully exploited to, for example, identify common clusters of problems and complaints on certain automobiles, etc. Likewise, in the medical field, open-ended descriptions by patients of their own symptoms might yield useful clues for the actual medical diagnosis.

Investigating competitors by crawling their web sites. Another type of potentially very useful application is to automatically process the contents of Web pages in a particular domain. For example, you could go to a Web page and begin crawling the links you find there to process all Web pages that are referenced. In this manner, you could automatically derive a list of terms and documents available at that site, and hence quickly determine the most important terms and features that are described. It is easy to see how these capabilities could efficiently deliver valuable business intelligence about the activities of competitors.

Approaches to text mining. To reiterate, text mining can be summarized as a process of numericizing text. At the simplest level, all words found in the input documents will be indexed and counted in order to compute a table of documents and words, i.e., a matrix of frequencies that enumerates the number of times that each word occurs in each document. This basic process can be further refined to exclude certain common words such as "the" and "a" (stop word lists) and to combine different grammatical forms of the same words, such as "traveling", "traveled", "travel", etc. (stemming). However, once a table of (unique) words (terms) by documents has been derived, all standard statistical and data mining techniques can be applied to derive dimensions or clusters of words or documents, or to identify important words or terms that best predict another outcome variable of interest.

Using well-tested methods and understanding the results of text mining. Once a data matrix has been computed from the input documents and the words found in those documents, various well-known analytic techniques can be used for further processing those data, including methods for clustering, factoring, or predictive data mining (see, for example, Manning and Schütze, 2002).
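A minimal base-R sketch of this numericizing step; the three toy "documents", the stop-word list, and the naive whitespace tokenization are illustrative assumptions (a real application would use a dedicated text mining package).

    docs <- c("the car had no defects",
              "gas mileage and economy were great",
              "the economy of the car was great")
    stopwords <- c("the", "a", "of", "and", "was", "were", "had", "no")
    tokens <- lapply(strsplit(tolower(docs), "[^a-z]+"),
                     function(tk) tk[!tk %in% stopwords & nchar(tk) > 0])
    terms  <- sort(unique(unlist(tokens)))
    dtm    <- t(sapply(tokens, function(tk) table(factor(tk, levels = terms))))
    rownames(dtm) <- paste0("doc", seq_along(docs))
    dtm   # documents-by-words matrix of raw frequencies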
Black-box approaches to text mining and extraction of concepts. There are text mining applications that offer black-box methods to extract deep meaning from documents with little human effort (to first read and understand those documents). These text mining applications rely on proprietary algorithms for presumably extracting concepts from text, and may even claim to be able to summarize large numbers of text documents automatically, retaining the core and most important meaning of those documents. While there are numerous algorithmic approaches to extracting meaning from documents, this type of technology is very much still in its infancy, and the aspiration to provide meaningful automated summaries of large numbers of documents may forever remain elusive. We urge skepticism when using such algorithms because (1) if it is not clear to the user how those algorithms work, it cannot possibly be clear how to interpret the results of those algorithms, and (2) the methods used in those programs are not open to scrutiny, for example by the academic community and peer review, and hence we simply don't know how well they might perform in different domains. As a final thought on this subject, consider this concrete example: try the various automated translation services available via the Web that can translate entire paragraphs of text from one language into another. Then translate some text, even simple text, from your native language to some other language and back, and review the results. Almost every time, the attempt to translate even short sentences to other languages and back while retaining the original meaning of the sentence produces humorous rather than accurate results. This illustrates the difficulty of automatically interpreting the meaning of text.

Text mining as document search. There is another type of application that is often described and referred to as text mining: the automatic search of large numbers of documents based on key words or key phrases. This is the domain of, for example, the popular internet search engines that have been developed over the last decade to provide efficient access to Web pages with certain content. While this is obviously an important type of application, with many uses in any organization that needs to search very large document repositories based on varying criteria, it is very different from what has been described here.

Issues and considerations for numericizing text. Large numbers of small documents vs. small numbers of large documents. Examples of scenarios using large numbers of small or moderate-sized documents were given earlier (e.g., analyzing warranty or insurance claims, diagnostic interviews, etc.). On the other hand, if your intent is to extract concepts from only a few documents that are very large (e.g., two lengthy books), then statistical analyses are generally less powerful because the number of cases (documents) in this case is very small while the number of variables (extracted words) is very large.

Excluding certain characters, short words, numbers, etc. Excluding numbers, certain characters, or sequences of characters, or words that are shorter or longer than a certain number of letters can be done before the indexing of the input documents starts. You may also want to exclude rare words, defined as those that only occur in a small percentage of the processed documents.

Include lists, exclude lists (stop-words). A specific list of words to be indexed can be defined; this is useful when you want to search explicitly for particular words and classify the input documents based on the frequencies with which those words occur. Also, stop-words, i.e., terms that are to be excluded from the indexing, can be defined.
Typically, a default list of English stop words includes "the", "a", "of", "since", etc., i.e., words that are used in the respective language very frequently but communicate very little unique information about the contents of the document.

Synonyms and phrases. Synonyms, such as "sick" or "ill", or words that are used in particular phrases where they denote a unique meaning, can be combined for indexing. For example, "Microsoft Windows" might be such a phrase, which is a specific reference to the computer operating system and has nothing to do with the common use of the term "Windows" as it might, for example, be used in descriptions of home improvement projects.

Stemming algorithms. An important pre-processing step before the indexing of input documents begins is the stemming of words. The term stemming refers to the reduction of words to their roots so that, for example, different grammatical forms or declinations of verbs are identified and indexed (counted) as the same word. For example, stemming will ensure that both "traveling" and "traveled" are recognized by the text mining program as the same word.

Support for different languages. Stemming, synonyms, the letters that are permitted in words, etc. are highly language-dependent operations. Therefore, support for different languages is important.

Transforming word frequencies. Once the input documents have been indexed and the initial word frequencies (by document) computed, a number of additional transformations can be performed to summarize and aggregate the information that was extracted.

Log-frequencies. First, various transformations of the frequency counts can be performed. The raw word or term frequencies generally reflect how salient or important a word is in each document; specifically, words that occur with greater frequency in a document are better descriptors of the contents of that document. However, it is not reasonable to assume that the word counts themselves are proportional to their importance as descriptors of the documents. For example, if a word occurs 1 time in document A but 3 times in document B, it is not necessarily reasonable to conclude that this word is 3 times as important a descriptor of document B as compared to document A. Thus, a common transformation of the raw word frequency counts (wf) is to compute

    f(wf) = 1 + log(wf), for wf > 0.

This transformation will dampen the raw frequencies and how they affect the results of subsequent computations.

Binary frequencies. Likewise, an even simpler transformation can be used that merely enumerates whether a term is used in a document, i.e.

    f(wf) = 1, for wf > 0.

The resulting documents-by-words matrix will contain only 1s and 0s to indicate the presence or absence of the respective words. Again, this transformation will dampen the effect of the raw frequency counts on subsequent computations and analyses.
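A minimal sketch of these two transformations in R, applied to the documents-by-words matrix dtm built in the earlier sketch (continuing from that sketch is an assumption; any raw frequency matrix would do):

    log_dtm <- ifelse(dtm > 0, 1 + log(dtm), 0)   # dampened log-frequencies
    bin_dtm <- (dtm > 0) * 1                      # binary presence/absence indicators
    log_dtm
    bin_dtm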
Inverse document frequencies. Another issue that you may want to consider more carefully, and reflect in the indices used in further analyses, is the relative document frequencies (df) of different words. For example, a term such as "guess" may occur frequently in all documents, while another term, such as "software", may occur in only a few. The reason is that we might make guesses in various contexts, regardless of the specific topic, whereas "software" is a more semantically focused term that is only likely to occur in documents that deal with computer software. A common and very useful transformation that reflects both the specificity of words (document frequencies) and the overall frequencies of their occurrences (word frequencies) is the so-called inverse document frequency (for the i-th word and j-th document):

    idf(i, j) = 0                                  if wf_ij = 0
    idf(i, j) = (1 + log(wf_ij)) · log(N / df_i)   if wf_ij ≥ 1.

In this formula (see also formula 15.5 in Manning and Schütze, 2002), N is the total number of documents and df_i is the document frequency for the i-th word (the number of documents that include this word). Hence, it can be seen that this formula includes both the dampening of the simple word frequencies via the log function (described above) and a weighting factor that evaluates to 0 if the word occurs in all documents (log(N/N) = log(1) = 0) and to its maximum value when a word occurs in only a single document (log(N/1) = log(N)). It can easily be seen how this transformation will create indices that reflect both the relative frequencies of occurrence of the words and their semantic specificities over the documents included in the analysis.
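A minimal sketch of this weighting in R, applied to the dtm matrix from the earlier sketches (again a continuation under the same illustrative assumptions):

    N   <- nrow(dtm)                 # total number of documents
    df  <- colSums(dtm > 0)          # document frequency of each word
    idf <- ifelse(dtm > 0, sweep(1 + log(dtm), 2, log(N / df), "*"), 0)
    round(idf, 3)                    # dampened, specificity-weighted frequencies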
In general, the purpose of this technique is to reduce the overall dimensionality of the input matrix (number of input documents by number of extracted words) to a lower-dimensional space, where each consecutive dimension represents the largest degree of variability (between words and documents) possible. Ideally, you might identify the two or three most salient dimensions, accounting for most of the variability (differences) between the words and documents and, hence, identify the latent semantic space that organizes the words and documents in the analysis. In some way, once such dimensions can be identified, you have extracted the underlying meaning of what is contained (discussed, described) in the documents. Incorporating Text Mining Results in Data Mining Projects After significant (e.g., frequent) words have been extracted from a set of input documents, and/or after singular value decomposition has been applied to extract salient semantic dimensions, typically the next and most important step is to use the extracted information in a data mining project. Graphics (visual data mining methods). Depending on the purpose of the analyses, in some instances the extraction of semantic dimensions alone can be a useful outcome if it clarifies the underlying structure of what is contained in the input documents. For example, a study of new car owners' comments about their vehicles may uncover the salient dimensions in the minds of those drivers when they think about or consider their automobile (or how they feel about it). For marketing research purposes, that in itself can be a useful and significant result. You can use the graphics (e.g., 2D scatterplots or 3D scatterplots) to help you visualize and identify the semantic space extracted from the input documents. Clustering and factoring. You can use cluster analysis methods to identify groups of documents (e.g., vehicle owners who described their new cars), that is, to identify groups of similar input texts. This type of analysis also could be extremely useful in the context of market research studies, for example of new car owners. You can also use Factor Analysis and Principal Components and Classification Analysis (to factor analyze words or documents). Predictive data mining. Another possibility is to use the raw or transformed word counts as predictor variables in predictive data mining projects. Time Series Analysis: How To Identify Patterns in Time Series Data In the following topics, we will first review techniques used to identify patterns in time series data (such as smoothing and curve fitting techniques and autocorrelations), then we will introduce a general class of models that can be used to represent time series data and generate predictions (autoregressive and moving average models). Finally, we will review some simple but commonly used modeling and forecasting techniques based on linear regression. For more information see the topics below. General Introduction In the following topics, we will review techniques that are useful for analyzing time series data, that is, sequences of measurements that follow non-random orders. Unlike the analyses of random samples of observations that are discussed in the context of most other statistics, the analysis of time series is based on the assumption that successive values in the data file represent consecutive measurements taken at equally spaced time intervals.
Detailed discussions of the methods described in this section can be found in Anderson (1976), Box and Jenkins (1976), Kendall (1984), Kendall and Ord (1990), Montgomery, Johnson, and Gardiner (1990), Pankratz (1983), Shumway (1988), Vandaele (1983), Walker (1991), and Wei (1989). Two Main Goals There are two main goals of time series analysis: (a) identifying the nature of the phenomenon represented by the sequence of observations, and (b) forecasting (predicting future values of the time series variable). Both of these goals require that the pattern of observed time series data is identified and more or less formally described. Once the pattern is established, we can interpret and integrate it with other data (i.e., use it in our theory of the investigated phenomenon, e.g., seasonal commodity prices). Regardless of the depth of our understanding and the validity of our interpretation (theory) of the phenomenon, we can extrapolate the identified pattern to predict future events. Identifying Patterns in Time Series Data For more information on simple autocorrelations (introduced in this section) and other autocorrelations, see Anderson (1976), Box and Jenkins (1976), Kendall (1984), Pankratz (1983), and Vandaele (1983). See also: Systematic Pattern and Random Noise As in most other analyses, in time series analysis it is assumed that the data consist of a systematic pattern (usually a set of identifiable components) and random noise (error), which usually makes the pattern difficult to identify. Most time series analysis techniques involve some form of filtering out noise in order to make the pattern more salient. Two General Aspects of Time Series Patterns Most time series patterns can be described in terms of two basic classes of components: trend and seasonality. The former represents a general systematic linear or (most often) nonlinear component that changes over time and does not repeat, or at least does not repeat within the time range captured by our data (e.g., a plateau followed by a period of exponential growth). The latter may have a formally similar nature (e.g., a plateau followed by a period of exponential growth), however, it repeats itself in systematic intervals over time. Those two general classes of time series components may coexist in real-life data. For example, sales of a company can rapidly grow over years but they still follow consistent seasonal patterns (e.g., as much as 25% of yearly sales each year are made in December, whereas only 4% in August). This general pattern is well illustrated in a classic Series G data set (Box and Jenkins, 1976, p. 531) representing monthly international airline passenger totals (measured in thousands) in twelve consecutive years from 1949 to 1960 (see example data file G.sta and graph above). If you plot the successive observations (months) of airline passenger totals, a clear, almost linear trend emerges, indicating that the airline industry enjoyed a steady growth over the years (approximately 4 times more passengers traveled in 1960 than in 1949). At the same time, the monthly figures will follow an almost identical pattern each year (e.g., more people travel during holidays than during any other time of the year). This example data file also illustrates a very common general type of pattern in time series data, where the amplitude of the seasonal changes increases with the overall trend (i.e., the variance is correlated with the mean over the segments of the series).
This pattern, which is called multiplicative seasonality, indicates that the relative amplitude of seasonal changes is constant over time, thus it is related to the trend. Trend Analysis There are no proven automatic techniques to identify trend components in time series data; however, as long as the trend is monotonic (consistently increasing or decreasing), that part of the data analysis is typically not very difficult. If the time series data contain considerable error, then the first step in the process of trend identification is smoothing. Smoothing. Smoothing always involves some form of local averaging of data such that the nonsystematic components of individual observations cancel each other out. The most common technique is moving average smoothing, which replaces each element of the series by either the simple or weighted average of n surrounding elements, where n is the width of the smoothing window (see Box & Jenkins, 1976; Velleman & Hoaglin, 1981). Medians can be used instead of means. The main advantage of median smoothing, as compared to moving average smoothing, is that its results are less biased by outliers (within the smoothing window). Thus, if there are outliers in the data (e.g., due to measurement errors), median smoothing typically produces smoother or at least more reliable curves than moving average smoothing based on the same window width. The main disadvantage of median smoothing is that in the absence of clear outliers it may produce more jagged curves than moving average smoothing, and it does not allow for weighting. In the relatively less common cases (in time series data), when the measurement error is very large, the distance weighted least squares smoothing or negative exponentially weighted smoothing techniques can be used. All those methods will filter out the noise and convert the data into a smooth curve that is relatively unbiased by outliers (see the respective sections on each of those methods for more details). Series with relatively few and systematically distributed points can be smoothed with bicubic splines. Fitting a function. Many monotonic time series can be adequately approximated by a linear function; if there is a clear monotonic nonlinear component, the data first need to be transformed to remove the nonlinearity. Usually a logarithmic, exponential, or (less often) polynomial function can be used. Analysis of Seasonality Seasonal dependency (seasonality) is another general component of the time series pattern. The concept was illustrated in the example of the airline passengers data above. It is formally defined as correlational dependency of order k between each i-th element of the series and the (i-k)-th element (Kendall, 1976) and measured by autocorrelation (i.e., a correlation between the two terms); k is usually called the lag. If the measurement error is not too large, seasonality can be visually identified in the series as a pattern that repeats every k elements. Autocorrelation correlogram. Seasonal patterns of time series can be examined via correlograms. The correlogram (autocorrelogram) displays graphically and numerically the autocorrelation function (ACF), that is, serial correlation coefficients (and their standard errors) for consecutive lags in a specified range of lags (e.g., 1 through 30).
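A minimal pandas sketch of the moving average and median smoothing described above, run on an invented noisy series with two artificial outliers (series, window width, and outlier positions are all illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical monthly series: linear trend plus noise, with two artificial outliers.
rng = np.random.default_rng(0)
y = pd.Series(np.linspace(100, 200, 120) + rng.normal(0, 10, 120))
y.iloc[[30, 75]] += 80

n = 12  # width of the smoothing window
ma_smooth = y.rolling(window=n, center=True).mean()        # moving average smoothing
median_smooth = y.rolling(window=n, center=True).median()  # median smoothing, less biased by outliers

print(pd.DataFrame({"raw": y, "moving_average": ma_smooth, "median": median_smooth}).head(20))
```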
Ranges of two standard errors for each lag are usually marked in correlograms, but typically the size of the autocorrelation is of more interest than its reliability (see Elementary Concepts) because we are usually interested only in very strong (and thus highly significant) autocorrelations. Examining correlograms. While examining correlograms, you should keep in mind that autocorrelations for consecutive lags are formally dependent. Consider the following example. If the first element is closely related to the second, and the second to the third, then the first element must also be somewhat related to the third one, etc. This implies that the pattern of serial dependencies can change considerably after removing the first order autocorrelation (i.e., after differencing the series with a lag of 1). Partial autocorrelations. Another useful method to examine serial dependencies is to examine the partial autocorrelation function (PACF) - an extension of autocorrelation, where the dependence on the intermediate elements (those within the lag) is removed. In other words, the partial autocorrelation is similar to autocorrelation, except that when calculating it, the (auto)correlations with all the elements within the lag are partialled out (Box & Jenkins, 1976; see also McDowall, McCleary, Meidinger, & Hay, 1980). If a lag of 1 is specified (i.e., there are no intermediate elements within the lag), then the partial autocorrelation is equivalent to the autocorrelation. In a sense, the partial autocorrelation provides a cleaner picture of serial dependencies for individual lags (not confounded by other serial dependencies). Removing serial dependency. Serial dependency for a particular lag k can be removed by differencing the series, that is, converting each i-th element of the series into its difference from the (i-k)-th element. There are two major reasons for such transformations. First, we can identify the hidden nature of seasonal dependencies in the series. Remember that, as mentioned in the previous paragraph, autocorrelations for consecutive lags are interdependent. Therefore, removing some of the autocorrelations will change other autocorrelations, that is, it may eliminate them or it may make some other seasonalities more apparent. The other reason for removing seasonal dependencies is to make the series stationary, which is necessary for ARIMA and other techniques. For more information on Time Series methods, see also: General Introduction The modeling and forecasting procedures discussed in Identifying Patterns in Time Series Data involved knowledge about the mathematical model of the process. However, in real-life research and practice, patterns of the data are unclear, individual observations involve considerable error, and we still need not only to uncover the hidden patterns in the data but also to generate forecasts. The ARIMA methodology developed by Box and Jenkins (1976) allows us to do just that; it has gained enormous popularity in many areas, and research practice confirms its power and flexibility (Hoff, 1983; Pankratz, 1983; Vandaele, 1983). However, because of its power and flexibility, ARIMA is a complex technique; it is not easy to use, it requires a great deal of experience, and although it often produces satisfactory results, those results depend on the researcher's level of expertise (Bails & Peppers, 1982). The following sections will introduce the basic ideas of this methodology.
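Before turning to ARIMA, a short sketch of the correlogram, partial autocorrelation, and differencing ideas above, using the acf and pacf functions assumed to be available in statsmodels and a simulated seasonal series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import acf, pacf

# Hypothetical series: trend + yearly (lag-12) seasonality + noise.
rng = np.random.default_rng(1)
t = np.arange(144)
y = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size))

# Serial correlation coefficients for lags 0..30 (the correlogram values).
print(acf(y, nlags=30))
print(pacf(y, nlags=30))

# Removing serial dependency by differencing: lag-1 difference for the trend,
# then a seasonal (lag-12) difference for the yearly pattern.
y_diff = y.diff(1).dropna()
y_seasonal_diff = y_diff.diff(12).dropna()
print(acf(y_seasonal_diff, nlags=30))
```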
For those interested in a brief, applications-oriented (non-mathematical) introduction to ARIMA methods, we recommend McDowall, McCleary, Meidinger, and Hay (1980). Two Common Processes Autoregressive process. Most time series consist of elements that are serially dependent in the sense that you can estimate a coefficient or a set of coefficients that describe consecutive elements of the series from specific, time-lagged (previous) elements. This can be summarized in the equation: x_t = ξ + φ1·x_(t-1) + φ2·x_(t-2) + φ3·x_(t-3) + ... + ε_t, where ξ is a constant (intercept), and φ1, φ2, φ3 are the autoregressive model parameters. Put into words, each observation is made up of a random error component (random shock, ε) and a linear combination of prior observations. Stationarity requirement. Note that an autoregressive process will only be stable if the parameters are within a certain range; for example, if there is only one autoregressive parameter, then it must fall within the interval -1 < φ1 < 1. Otherwise, past effects would accumulate and the values of successive x_t's would move towards infinity, that is, the series would not be stationary. If there is more than one autoregressive parameter, similar (general) restrictions on the parameter values can be defined (e.g., see Box & Jenkins, 1976; Montgomery, 1990). Moving average process. Independent from the autoregressive process, each element in the series can also be affected by the past error (or random shock) that cannot be accounted for by the autoregressive component, that is: x_t = μ + ε_t + θ1·ε_(t-1) + θ2·ε_(t-2) + θ3·ε_(t-3) + ..., where μ is a constant, and θ1, θ2, θ3 are the moving average model parameters (some texts write the θ terms with negative signs, which does not change the substance of the model). Put into words, each observation is made up of a random error component (random shock, ε) and a linear combination of prior random shocks. Invertibility requirement. Without going into too much detail, there is a duality between the moving average process and the autoregressive process (e.g., see Box & Jenkins, 1976; Montgomery, Johnson, & Gardiner, 1990), that is, the moving average equation above can be rewritten (inverted) into an autoregressive form (of infinite order). However, analogous to the stationarity condition described above, this can only be done if the moving average parameters follow certain conditions, that is, if the model is invertible. Otherwise, the series will not be stationary. ARIMA Methodology Autoregressive moving average model. The general model introduced by Box and Jenkins (1976) includes autoregressive as well as moving average parameters, and explicitly includes differencing in the formulation of the model. Specifically, the three types of parameters in the model are: the autoregressive parameters (p), the number of differencing passes (d), and the moving average parameters (q). In the notation introduced by Box and Jenkins, models are summarized as ARIMA (p, d, q); so, for example, a model described as (0, 1, 2) means that it contains 0 (zero) autoregressive (p) parameters and 2 moving average (q) parameters which were computed for the series after it was differenced once. Identification. As mentioned earlier, the input series for ARIMA needs to be stationary, that is, it should have a constant mean, variance, and autocorrelation through time. Therefore, usually the series first needs to be differenced until it is stationary (this also often requires log transforming the data to stabilize the variance). The number of times the series needs to be differenced to achieve stationarity is reflected in the d parameter (see the previous paragraph). In order to determine the necessary level of differencing, you should examine the plot of the data and the autocorrelogram.
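To make the autoregressive and moving average processes, and the stationarity requirement, concrete, here is a small simulation sketch (the φ and θ values are arbitrary illustrations, not estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar1(phi, n=200):
    """Simulate x_t = phi * x_(t-1) + eps_t."""
    x = np.zeros(n)
    eps = rng.normal(0, 1, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

stationary = simulate_ar1(phi=0.7)   # -1 < phi < 1: fluctuates around a stable mean
explosive = simulate_ar1(phi=1.1)    # outside that range: past effects accumulate without bound
print(stationary[-5:])
print(explosive[-5:])

# A moving average process of order 1: x_t = eps_t + theta * eps_(t-1).
theta = 0.7
eps = rng.normal(0, 1, 200)
ma1 = eps[1:] + theta * eps[:-1]
print(ma1[:5])
```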
Significant changes in level (strong upward or downward changes) usually require first order non-seasonal (lag 1) differencing; strong changes of slope usually require second order non-seasonal differencing. Seasonal patterns require respective seasonal differencing (see below). If the estimated autocorrelation coefficients decline slowly at longer lags, first order differencing is usually needed. However, you should keep in mind that some time series may require little or no differencing, and that over-differenced series produce less stable coefficient estimates. At this stage (which is usually called the Identification phase, see below) we also need to decide how many autoregressive (p) and moving average (q) parameters are necessary to yield an effective but still parsimonious model of the process (parsimonious means that it has the fewest parameters and greatest number of degrees of freedom among all models that fit the data). In practice, the numbers of the p or q parameters very rarely need to be greater than 2 (see below for more specific recommendations). Estimation and Forecasting. At the next step (Estimation), the parameters are estimated (using function minimization procedures; see below for more information on minimization procedures; see also Nonlinear Estimation), so that the sum of squared residuals is minimized. The estimates of the parameters are used in the last stage (Forecasting) to calculate new values of the series (beyond those included in the input data set) and confidence intervals for those predicted values. The estimation process is performed on transformed (differenced) data; before the forecasts are generated, the series needs to be integrated (integration is the inverse of differencing) so that the forecasts are expressed in values compatible with the input data. This automatic integration feature is represented by the letter I in the name of the methodology (ARIMA = Auto-Regressive Integrated Moving Average). The constant in ARIMA models. In addition to the standard autoregressive and moving average parameters, ARIMA models may also include a constant, as described above. The interpretation of a (statistically significant) constant depends on the model that is fit. Specifically, (1) if there are no autoregressive parameters in the model, then the expected value of the constant is the mean of the series; (2) if there are autoregressive parameters in the series, then the constant represents the intercept. If the series is differenced, then the constant represents the mean or intercept of the differenced series. For example, if the series is differenced once, and there are no autoregressive parameters in the model, then the constant represents the mean of the differenced series, and therefore the linear trend slope of the un-differenced series. Identification Phase Number of parameters to be estimated. Before the estimation can begin, we need to decide on (identify) the specific number and type of ARIMA parameters to be estimated. The major tools used in the identification phase are plots of the series, correlograms of autocorrelation (ACF), and partial autocorrelation (PACF). The decision is not straightforward and in less typical cases requires not only experience but also a good deal of experimentation with alternative models (as well as the technical parameters of ARIMA).
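Once p, d, and q have been tentatively identified, the estimation and forecasting steps described above might look like the following sketch; this is one possible workflow assuming the ARIMA implementation in statsmodels, with a simulated series standing in for real data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical non-stationary series (random walk with drift).
rng = np.random.default_rng(3)
y = pd.Series(np.cumsum(0.2 + rng.normal(0, 1, 300)))

# Estimation: fit an ARIMA(0, 1, 2) model, i.e. the series is differenced once
# and two moving average parameters are estimated.
result = ARIMA(y, order=(0, 1, 2)).fit()
print(result.summary())

# Forecasting: forecasts are integrated back to the scale of the un-differenced series.
print(result.forecast(steps=12))
```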
However, a majority of empirical time series patterns can be sufficiently approximated using one of the 5 basic models that can be identified based on the shape of the autocorrelogram (ACF) and partial autocorrelogram (PACF). The following brief summary is based on practical recommendations of Pankratz (1983); for additional practical advice, see also Hoff (1983), McCleary and Hay (1980), McDowall, McCleary, Meidinger, and Hay (1980), and Vandaele (1983). Also, note that since the number of parameters (to be estimated) of each kind is almost never greater than 2, it is often practical to try alternative models on the same data. One autoregressive (p) parameter: ACF - exponential decay; PACF - spike at lag 1, no correlation for other lags. Two autoregressive (p) parameters: ACF - a sine-wave shape pattern or a set of exponential decays; PACF - spikes at lags 1 and 2, no correlation for other lags. One moving average (q) parameter: ACF - spike at lag 1, no correlation for other lags; PACF - damps out exponentially. Two moving average (q) parameters: ACF - spikes at lags 1 and 2, no correlation for other lags; PACF - a sine-wave shape pattern or a set of exponential decays. One autoregressive (p) and one moving average (q) parameter: ACF - exponential decay starting at lag 1; PACF - exponential decay starting at lag 1. Seasonal models. Multiplicative seasonal ARIMA is a generalization and extension of the method introduced in the previous paragraphs to series in which a pattern repeats seasonally over time. In addition to the non-seasonal parameters, seasonal parameters for a specified lag (established in the identification phase) need to be estimated. Analogous to the simple ARIMA parameters, these are: seasonal autoregressive (ps), seasonal differencing (ds), and seasonal moving average parameters (qs). For example, the model (0,1,2)(0,1,1) describes a model that includes no autoregressive parameters, 2 regular moving average parameters and 1 seasonal moving average parameter, and these parameters were computed for the series after it was differenced once with lag 1, and once seasonally differenced. The seasonal lag used for the seasonal parameters is usually determined during the identification phase and must be explicitly specified. The general recommendations concerning the selection of parameters to be estimated (based on ACF and PACF) also apply to seasonal models. The main difference is that in seasonal series, ACF and PACF will show sizable coefficients at multiples of the seasonal lag (in addition to their overall patterns reflecting the non-seasonal components of the series). Parameter Estimation There are several different methods for estimating the parameters. All of them should produce very similar estimates, but may be more or less efficient for any given model. In general, during the parameter estimation phase a function minimization algorithm is used (the so-called quasi-Newton method; refer to the description of the Nonlinear Estimation method) to maximize the likelihood (probability) of the observed series, given the parameter values. In practice, this requires the calculation of the (conditional) sums of squares (SS) of the residuals, given the respective parameters. Different methods have been proposed to compute the SS for the residuals: (1) the approximate maximum likelihood method according to McLeod and Sales (1983), (2) the approximate maximum likelihood method with backcasting, and (3) the exact maximum likelihood method according to Melard (1984).
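A hedged sketch of fitting the (0,1,2)(0,1,1) seasonal model mentioned above with a seasonal lag of 12, assuming the SARIMAX state-space implementation in statsmodels and a simulated monthly series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly series with trend and yearly seasonality.
rng = np.random.default_rng(4)
t = np.arange(144)
y = pd.Series(100 + 0.8 * t + 15 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size))

# (0,1,2)(0,1,1) with seasonal lag 12: no AR parameters, 2 regular MA parameters,
# 1 seasonal MA parameter, one lag-1 difference and one seasonal (lag-12) difference.
result = SARIMAX(y, order=(0, 1, 2), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(result.summary())
print(result.forecast(steps=12))
```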
Comparison of methods. In general, all methods should yield very similar parameter estimates. Also, all methods are about equally efficient in most real-world time series applications. However, method 1 above (approximate maximum likelihood, no backcasts) is the fastest, and should be used in particular for very long time series (e.g., with more than 30,000 observations). Melard's exact maximum likelihood method (method 3 above) may also become inefficient when used to estimate parameters for seasonal models with long seasonal lags (e.g., with yearly lags of 365 days). On the other hand, you should always use the approximate maximum likelihood method first in order to establish initial parameter estimates that are very close to the actual final values; thus, usually only a few iterations with the exact maximum likelihood method (method 3 above) are necessary to finalize the parameter estimates. Parameter standard errors. For all parameter estimates, so-called asymptotic standard errors are computed. These are computed from the matrix of second-order partial derivatives that is approximated via finite differencing (see also the respective discussion in Nonlinear Estimation). Penalty value. As mentioned above, the estimation procedure requires that the (conditional) sums of squares of the ARIMA residuals be minimized. If the model is inappropriate, it may happen during the iterative estimation process that the parameter estimates become very large, and, in fact, invalid. In that case, the procedure will assign a very large value (a so-called penalty value) to the SS. This usually entices the iteration process to move the parameters away from invalid ranges. However, in some cases even this strategy fails, and you may see on the screen (during the Estimation procedure) very large values for the SS in consecutive iterations. In that case, carefully evaluate the appropriateness of your model. If your model contains many parameters, and perhaps an intervention component (see below), you may try again with different parameter start values. Evaluation of the Model Parameter estimates. Approximate t values, computed from the parameter standard errors (see above), are reported for the parameter estimates. If a parameter is not significant, it can in most cases be dropped from the model without substantially affecting the overall fit of the model. Other quality criteria. Another straightforward and common measure of the reliability of the model is the accuracy of its forecasts generated based on partial data, so that the forecasts can be compared with known (original) observations. However, a good model should not only provide sufficiently accurate forecasts, it should also be parsimonious and produce statistically independent residuals that contain only noise and no systematic components (e.g., the correlogram of residuals should not reveal any serial dependencies). A good test of the model is (a) to plot the residuals and inspect them for any systematic trends, and (b) to examine the autocorrelogram of residuals (there should be no serial dependency between residuals). Analysis of residuals. The major concern here is that the residuals are systematically distributed across the series (e.g., they could be negative in the first part of the series and approach zero in the second part) or that they contain some serial dependency, which may suggest that the ARIMA model is inadequate. The analysis of ARIMA residuals constitutes an important test of the model.
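One way such a residual check might be sketched, assuming statsmodels' ARIMA, acf, and Ljung-Box test on a simulated series (model order and lag choices are illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import acf
from statsmodels.stats.diagnostic import acorr_ljungbox

# Fit a simple model to a hypothetical series and examine its residuals.
rng = np.random.default_rng(5)
y = pd.Series(np.cumsum(rng.normal(0.1, 1, 300)))
result = ARIMA(y, order=(0, 1, 1)).fit()

resid = result.resid.iloc[1:]            # drop the first residual, which is affected by differencing
print(acf(resid, nlags=20))              # correlogram of residuals: should show no serial dependency
print(acorr_ljungbox(resid, lags=[10]))  # Ljung-Box test: a large p-value suggests the residuals are noise
```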
The estimation procedure assumes that the residuals are not (auto-)correlated and that they are normally distributed. Limitations. The ARIMA method is appropriate only for a time series that is stationary (i.e., its mean, variance, and autocorrelation should be approximately constant through time) and it is recommended that there are at least 50 observations in the input data. It is also assumed that the values of the estimated parameters are constant throughout the series. Interrupted Time Series ARIMA A common research question in time series analysis is whether an outside event affected subsequent observations. For example, did the implementation of a new economic policy improve economic performance; did a new anti-crime law affect subsequent crime rates; and so on. In general, we would like to evaluate the impact of one or more discrete events on the values in the time series. This type of interrupted time series analysis is described in detail in McDowall, McCleary, Meidinger, and Hay (1980). McDowall et al. distinguish between three major types of impacts that are possible: (1) permanent abrupt, (2) permanent gradual, and (3) abrupt temporary. See also: Exponential Smoothing General Introduction Exponential smoothing has become very popular as a forecasting method for a wide variety of time series data. Historically, the method was independently developed by Brown and Holt. Brown worked for the US Navy during World War II, where his assignment was to design a tracking system for fire-control information to compute the location of submarines. Later, he applied this technique to the forecasting of demand for spare parts (an inventory control problem). He described those ideas in his 1959 book on inventory control. Holt's research was sponsored by the Office of Naval Research; independently, he developed exponential smoothing models for constant processes, processes with linear trends, and for seasonal data. Gardner (1985) proposed a unified classification of exponential smoothing methods. Excellent introductions can also be found in Makridakis, Wheelwright, and McGee (1983), Makridakis and Wheelwright (1989), and Montgomery, Johnson, and Gardiner (1990). Simple Exponential Smoothing A simple and pragmatic model for a time series would be to consider each observation as consisting of a constant (b) and an error component ε (epsilon), that is: X_t = b + ε_t. The constant b is relatively stable in each segment of the series, but may change slowly over time. If appropriate, then one way to isolate the true value of b, and thus the systematic or predictable part of the series, is to compute a kind of moving average, where the current and immediately preceding (younger) observations are assigned greater weight than the respective older observations. Simple exponential smoothing accomplishes exactly such weighting, where exponentially smaller weights are assigned to older observations. The specific formula for simple exponential smoothing is: S_t = α·X_t + (1 - α)·S_(t-1). When applied recursively to each successive observation in the series, each new smoothed value (forecast) is computed as the weighted average of the current observation and the previous smoothed observation; the previous smoothed observation was computed in turn from the previous observed value and the smoothed value before the previous observation, and so on. Thus, in effect, each smoothed value is the weighted average of the previous observations, where the weights decrease exponentially depending on the value of parameter α (alpha).
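A minimal sketch of this recursion in plain Python, on an invented short series and an arbitrary α of 0.3:

```python
def simple_exponential_smoothing(x, alpha, s0=None):
    """Apply S_t = alpha*X_t + (1 - alpha)*S_(t-1) recursively; S_0 defaults to the first value."""
    s = [x[0] if s0 is None else s0]      # initial smoothed value S_0
    for x_t in x:
        s.append(alpha * x_t + (1 - alpha) * s[-1])
    return s[1:]                          # one smoothed value per observation

data = [71, 70, 69, 68, 64, 65, 72, 78, 75, 75, 75, 70]  # hypothetical observations
print(simple_exponential_smoothing(data, alpha=0.3))
```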
If α is equal to 1 (one), then the previous observations are ignored entirely; if α is equal to 0 (zero), then the current observation is ignored entirely, and the smoothed value consists entirely of the previous smoothed value (which in turn is computed from the smoothed observation before it, and so on; thus all smoothed values will be equal to the initial smoothed value S_0). Values of α in between will produce intermediate results. Even though significant work has been done to study the theoretical properties of (simple and complex) exponential smoothing (e.g., see Gardner, 1985; Muth, 1960; see also McKenzie, 1984, 1985), the method has gained popularity mostly because of its usefulness as a forecasting tool. For example, empirical research by Makridakis et al. (1982; Makridakis, 1983) has shown simple exponential smoothing to be the best choice for one-period-ahead forecasting, from among 24 other time series methods and using a variety of accuracy measures (see also Gross and Craig, 1974, for additional empirical evidence). Thus, regardless of the theoretical model for the process underlying the observed time series, simple exponential smoothing will often produce quite accurate forecasts. Choosing the Best Value for Parameter α (alpha) Gardner (1985) discusses various theoretical and empirical arguments for selecting an appropriate smoothing parameter. Obviously, looking at the formula presented above, α should fall into the interval between 0 (zero) and 1 (although, see Brenner et al., 1968, for an ARIMA perspective implying a somewhat wider admissible range); in practice, a value of α smaller than .30 is usually recommended. However, in the study by Makridakis et al. (1982), α values above .30 frequently yielded the best forecasts. After reviewing the literature on this topic, Gardner (1985) concludes that it is best to estimate an optimum α from the data (see below), rather than to guess and set an artificially low value. Estimating the best α value from the data. In practice, the smoothing parameter is often chosen by a grid search of the parameter space; that is, different values of α are tried, starting, for example, with α = 0.1 through α = 0.9, with increments of 0.1. Then α is chosen so as to produce the smallest sums of squares (or mean squares) for the residuals (i.e., observed values minus one-step-ahead forecasts; this mean squared error is also referred to as ex post mean squared error, ex post MSE for short). Indices of Lack of Fit (Error) The most straightforward way of evaluating the accuracy of the forecasts based on a particular α value is to simply plot the observed values and the one-step-ahead forecasts. This plot can also include the residuals (scaled against the right Y-axis), so that regions of better or worse fit can also easily be identified. This visual check of the accuracy of forecasts is often the most powerful method for determining whether or not the current exponential smoothing model fits the data. In addition, besides the ex post MSE criterion (see previous paragraph), there are other statistical measures of error that can be used to determine the optimum α parameter (see Makridakis, Wheelwright, and McGee, 1983): Mean error: The mean error (ME) value is simply computed as the average error value (average of observed minus one-step-ahead forecast). Obviously, a drawback of this measure is that positive and negative error values can cancel each other out, so this measure is not a very good indicator of overall fit. Mean absolute error: The mean absolute error (MAE) value is computed as the average absolute error value.
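A short sketch of the grid search just described, minimizing the ex post MSE of the one-step-ahead forecasts on a simulated series (the series, starting value, and grid are all illustrative assumptions):

```python
import numpy as np

def ses_one_step_forecasts(x, alpha, s0):
    """Return one-step-ahead forecasts F_t = S_(t-1) for each observation."""
    forecasts = []
    s = s0
    for x_t in x:
        forecasts.append(s)                # forecast made before seeing x_t
        s = alpha * x_t + (1 - alpha) * s  # update the smoothed value
    return np.array(forecasts)

rng = np.random.default_rng(6)
x = 50 + np.cumsum(rng.normal(0, 1, 100))  # hypothetical series

# Try alpha = 0.1, 0.2, ..., 0.9 and keep the value with the smallest ex post MSE.
best = min(
    ((alpha, np.mean((x - ses_one_step_forecasts(x, alpha, s0=x[0])) ** 2))
     for alpha in np.arange(0.1, 1.0, 0.1)),
    key=lambda pair: pair[1],
)
print("best alpha:", round(best[0], 1), "ex post MSE:", best[1])
```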
If this value is 0 (zero), the fit (forecast) is perfect. As compared to the mean squared error value, this measure of fit will de-emphasize outliers, that is, unique or rare large error values will affect the MAE less than the MSE value. Sum of squared error (SSE), Mean squared error. These values are computed as the sum (or average) of the squared error values. This is the most commonly used lack-of-fit indicator in statistical fitting procedures. Percentage error (PE). All the above measures rely on the actual error value. It may seem reasonable to rather express the lack of fit in terms of the relative deviation of the one-step-ahead forecasts from the observed values, that is, relative to the magnitude of the observed values. For example, when trying to predict monthly sales that may fluctuate widely (e.g., seasonally) from month to month, we may be satisfied if our prediction hits the target with about 10% accuracy. In other words, the absolute errors may be not so much of interest as are the relative errors in the forecasts. To assess the relative error, various indices have been proposed (see Makridakis, Wheelwright, and McGee, 1983). The first one, the percentage error value, is computed as PE_t = 100·(X_t - F_t)/X_t, where X_t is the observed value at time t, and F_t is the forecast (smoothed value). Mean percentage error (MPE). This value is computed as the average of the PE values. Mean absolute percentage error (MAPE). As is the case with the mean error value (ME, see above), a mean percentage error near 0 (zero) can be produced by large positive and negative percentage errors that cancel each other out. Thus, a better measure of relative overall fit is the mean absolute percentage error. Also, this measure is usually more meaningful than the mean squared error. For example, knowing that the average forecast is off by 5% is a useful result in and of itself, whereas a mean squared error of 30.8 is not immediately interpretable. Automatic search for the best parameter. A quasi-Newton function minimization procedure (the same as in ARIMA) is used to minimize either the mean squared error, mean absolute error, or mean absolute percentage error. In most cases, this procedure is more efficient than the grid search (particularly when more than one parameter must be determined), and the optimum α parameter can quickly be identified. The first smoothed value S_0. A final issue that we have neglected up to this point is the problem of the initial value, or how to start the smoothing process. If you look back at the formula above, it is evident that you need an S_0 value in order to compute the smoothed value (forecast) for the first observation in the series. Depending on the choice of the α parameter (i.e., when α is close to zero), the initial value for the smoothing process can affect the quality of the forecasts for many observations. As with most other aspects of exponential smoothing, it is recommended to choose the initial value that produces the best forecasts. On the other hand, in practice, when there are many leading observations prior to a crucial actual forecast, the initial value will not affect that forecast by much, since its effect will have long faded from the smoothed series (due to the exponentially decreasing weights, the older an observation the less it will influence the forecast).
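A small sketch computing the lack-of-fit indices defined above for a pair of hypothetical observed and forecast series (the numbers are invented for illustration):

```python
import numpy as np

def fit_measures(observed, forecast):
    """Lack-of-fit indices: observed and forecast are equal-length arrays of values."""
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    error = observed - forecast
    pe = 100.0 * error / observed            # percentage error (observed values must be non-zero)
    return {
        "ME": error.mean(),                  # mean error
        "MAE": np.abs(error).mean(),         # mean absolute error
        "SSE": (error ** 2).sum(),           # sum of squared error
        "MSE": (error ** 2).mean(),          # mean squared error
        "MPE": pe.mean(),                    # mean percentage error
        "MAPE": np.abs(pe).mean(),           # mean absolute percentage error
    }

print(fit_measures([112, 118, 132, 129], [110, 120, 128, 133]))
```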
Seasonal and Non-Seasonal Models With or Without Trend The discussion above in the context of simple exponential smoothing introduced the basic procedure for identifying a smoothing parameter, and for evaluating the goodness-of-fit of a model. In addition to simple exponential smoothing, more complex models have been developed to accommodate time series with seasonal and trend components. The general idea here is that forecasts are not only computed from consecutive previous observations (as in simple exponential smoothing), but an independent (smoothed) trend and seasonal component can be added. Gardner (1985) discusses the different models in terms of seasonality (none, additive, or multiplicative) and trend (none, linear, exponential, or damped). Additive and multiplicative seasonality. Many time series data follow recurring seasonal patterns. For example, annual sales of toys will probably peak in the months of November and December, and perhaps during the summer (with a much smaller peak) when children are on their summer break. This pattern will likely repeat every year, however, the relative amount of increase in sales during December may slowly change from year to year. Thus, it may be useful to smooth the seasonal component independently with an extra parameter, usually denoted as δ (delta). Seasonal components can be additive in nature or multiplicative. For example, during the month of December the sales for a particular toy may increase by 1 million dollars every year. Thus, we could add to our forecasts for every December the amount of 1 million dollars (over the respective annual average) to account for this seasonal fluctuation. In this case, the seasonality is additive. Alternatively, during the month of December the sales for a particular toy may increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, then the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would be 1.4). In plots of the series, the distinguishing characteristic between these two types of seasonal components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case, the size of the seasonal fluctuations varies, depending on the overall level of the series. The seasonal smoothing parameter δ. In general, the one-step-ahead forecasts are computed as (for no-trend models; for linear and exponential trend models a trend component is added to the model, see below): Forecast_t = S_t + I_(t-p) for additive seasonality, or Forecast_t = S_t·I_(t-p) for multiplicative seasonality. In this formula, S_t stands for the (simple) exponentially smoothed value of the series at time t, and I_(t-p) stands for the smoothed seasonal factor at time t minus p (the length of the season). Thus, compared to simple exponential smoothing, the forecast is enhanced by adding or multiplying the simple smoothed value by the predicted seasonal component.
This seasonal component is derived analogously to the S_t value from simple exponential smoothing as: I_t = I_(t-p) + δ·(1 - α)·e_t. Put into words, the predicted seasonal component at time t is computed as the respective seasonal component in the last seasonal cycle plus a portion of the error (e_t, the observed minus the forecast value at time t). Considering the formulas above, it is clear that parameter δ can assume values between 0 and 1. If it is zero, then the seasonal component for a particular point in time is predicted to be identical to the predicted seasonal component for the respective time during the previous seasonal cycle, which in turn is predicted to be identical to that from the previous cycle, and so on. Thus, if δ is zero, a constant unchanging seasonal component is used to generate the one-step-ahead forecasts. If the δ parameter is equal to 1, then the seasonal component is modified maximally at every step by the respective forecast error (times (1 - α), which we will ignore for the purpose of this brief introduction). In most cases, when seasonality is present in the time series, the optimum δ parameter will fall somewhere between 0 (zero) and 1 (one). Linear, exponential, and damped trend. To remain with the toy example above, the sales for a toy can show a linear upward trend (e.g., each year, sales increase by 1 million dollars), exponential growth (e.g., each year, sales increase by a factor of 1.3), or a damped trend (during the first year sales increase by 1 million dollars; during the second year the increase is only 80% of the previous year's increase, i.e., 800,000; during the next year it is again 80% of the previous year's increase, i.e., 800,000 · 0.8 = 640,000; etc.). Each type of trend leaves a clear signature that can usually be identified in the series; shown below in the brief discussion of the different models are icons that illustrate the general patterns. In general, the trend factor may change slowly over time, and, again, it may make sense to smooth the trend component with a separate parameter (denoted γ (gamma) for linear and exponential trend models, and φ (phi) for damped trend models). The trend smoothing parameters γ (linear and exponential trend) and φ (damped trend). Analogous to the seasonal component, when a trend component is included in the exponential smoothing process, an independent trend component is computed for each time, and modified as a function of the forecast error and the respective parameter. If the γ parameter is 0 (zero), then the trend component is constant across all values of the time series (and for all forecasts). If the parameter is 1, then the trend component is modified maximally from observation to observation by the respective forecast error. Parameter values that fall in between represent mixtures of those two extremes. Parameter φ is a trend modification parameter, and affects how strongly changes in the trend will affect estimates of the trend for subsequent forecasts, that is, how quickly the trend will be damped or increased. Classical Seasonal Decomposition (Census Method 1) General Introduction Suppose you recorded the monthly passenger load on international flights for a period of 12 years (see Box & Jenkins, 1976). If you plot those data, it is apparent that (1) there appears to be a linear upwards trend in the passenger loads over the years, and (2) there is a recurring pattern or seasonality within each year (i.e., most travel occurs during the summer months, and a minor peak occurs during the December holidays).
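Before moving on to classical decomposition, here is a hedged sketch of trend-plus-seasonality exponential smoothing (Holt-Winters style), assuming the ExponentialSmoothing class in statsmodels and a simulated monthly sales series with an additive trend and multiplicative seasonality:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly sales: linear trend times a yearly seasonal factor, plus noise.
rng = np.random.default_rng(7)
t = np.arange(96)
season = 1 + 0.4 * np.sin(2 * np.pi * t / 12)
y = pd.Series((100 + 2 * t) * season + rng.normal(0, 5, t.size))

# Additive trend, multiplicative seasonal component, season length 12; the smoothing
# parameters (alpha, gamma, and delta in the text above) are estimated from the data.
fit = ExponentialSmoothing(y, trend="add", seasonal="mul", seasonal_periods=12).fit()
print(fit.params)        # estimated smoothing parameters
print(fit.forecast(12))  # forecasts for the next season
```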
The purpose of the seasonal decomposition method is to isolate those components, that is, to de-compose the series into the trend effect, seasonal effects, and remaining variability. The classic technique designed to accomplish this decomposition is known as the Census I method. This technique is described and discussed in detail in Makridakis, Wheelwright, and McGee (1983), and Makridakis and Wheelwright (1989). General model. The general idea of seasonal decomposition is straightforward. In general, a time series like the one described above can be thought of as consisting of four different components: (1) a seasonal component (denoted as S_t, where t stands for the particular point in time), (2) a trend component (T_t), (3) a cyclical component (C_t), and (4) a random, error, or irregular component (I_t). The difference between a cyclical and a seasonal component is that the latter occurs at regular (seasonal) intervals, while cyclical factors usually have a longer duration that varies from cycle to cycle. In the Census I method, the trend and cyclical components are customarily combined into a trend-cycle component (TC_t). The specific functional relationship between these components can assume different forms. However, two straightforward possibilities are that they combine in an additive or a multiplicative fashion: X_t = TC_t + S_t + I_t, or X_t = TC_t · S_t · I_t. Here X_t stands for the observed value of the time series at time t. Given some a priori knowledge about the cyclical factors affecting the series (e.g., business cycles), the estimates for the different components can be used to compute forecasts for future observations. (However, the Exponential smoothing method, which can also incorporate seasonality and trend components, is the preferred technique for forecasting purposes.) Additive and multiplicative seasonality. Let's consider the difference between an additive and multiplicative seasonal component in an example: The annual sales of toys will probably peak in the months of November and December, and perhaps during the summer (with a much smaller peak) when children are on their summer break. This seasonal pattern will likely repeat every year. Seasonal components can be additive or multiplicative in nature. For example, during the month of December the sales for a particular toy may increase by 3 million dollars every year. Thus, we could add to our forecasts for every December the amount of 3 million to account for this seasonal fluctuation. In this case, the seasonality is additive. Alternatively, during the month of December the sales for a particular toy may increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, then the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would be 1.4). In plots of series, the distinguishing characteristic between these two types of seasonal components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case, the size of the seasonal fluctuations varies, depending on the overall level of the series. Additive and multiplicative trend-cycle.
We can extend the previous example to illustrate the additive and multiplicative trend-cycle components. In terms of our toy example, a fashion trend may produce a steady increase in sales (e.g., a trend towards more educational toys in general); as with the seasonal component, this trend may be additive (sales increase by 3 million dollars per year) or multiplicative (sales increase by 30%, or by a factor of 1.3, annually) in nature. In addition, cyclical components may impact sales; to reiterate, a cyclical component is different from a seasonal component in that it usually is of longer duration, and that it occurs at irregular intervals. For example, a particular toy may be particularly hot during a summer season (e.g., a particular doll which is tied to the release of a major children's movie, and is promoted with extensive advertising). Again, such a cyclical component can affect sales in an additive or multiplicative manner. Computations The Seasonal Decomposition (Census I) standard formulas are shown in Makridakis, Wheelwright, and McGee (1983), and Makridakis and Wheelwright (1989). Moving average. First a moving average is computed for the series, with the moving average window width equal to the length of one season. If the length of the season is even, then the user can choose to use either equal weights for the moving average, or unequal weights where the first and last observations in the moving average window are averaged. Ratios or differences. In the moving average series, all seasonal (within-season) variability will be eliminated; thus, the differences (in additive models) or ratios (in multiplicative models) of the observed and smoothed series will isolate the seasonal component (plus irregular component). Specifically, the moving average is subtracted from the observed series (for additive models) or the observed series is divided by the moving average values (for multiplicative models). Seasonal components. The seasonal component is then computed as the average (for additive models) or medial average (for multiplicative models) for each point in the season. (The medial average of a set of values is the mean after the smallest and largest values are excluded.) The resulting values represent the (average) seasonal component of the series. Seasonally adjusted series. The original series can be adjusted by subtracting from it (additive models) or dividing it by (multiplicative models) the seasonal component. The resulting series is the seasonally adjusted series (i.e., the seasonal component will be removed). Trend-cycle component. Remember that the cyclical component is different from the seasonal component in that it is usually longer than one season, and different cycles can be of different lengths. The combined trend and cyclical component can be approximated by applying to the seasonally adjusted series a 5-point (centered) weighted moving average smoothing transformation with the weights of 1, 2, 3, 2, 1. Random or irregular component. Finally, the random or irregular (error) component can be isolated by subtracting from the seasonally adjusted series (additive models) or dividing the adjusted series by (multiplicative models) the trend-cycle component. X-11 Census Method II Seasonal Adjustment The general ideas of seasonal decomposition and adjustment are discussed in the context of the Census I seasonal adjustment method (Seasonal Decomposition (Census I)). The Census method II (2) is an extension and refinement of the simple adjustment method.
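Before looking at the Census II refinements in detail, here is a minimal sketch in the spirit of the Census I computations just described, assuming statsmodels' seasonal_decompose (which uses a centered moving average and averaged, rather than medial-averaged, seasonal factors) and a simulated monthly series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly series with trend and multiplicative seasonality.
rng = np.random.default_rng(8)
t = np.arange(144)
y = pd.Series((200 + 3 * t) * (1 + 0.3 * np.sin(2 * np.pi * t / 12)) + rng.normal(0, 10, t.size))

# Decompose into trend-cycle, seasonal, and irregular components (season length 12).
result = seasonal_decompose(y, model="multiplicative", period=12)
print(result.seasonal[:12])           # estimated seasonal factors for one season
print(result.trend.dropna()[:5])      # trend-cycle estimate (undefined at the series ends)

# Seasonally adjusted series: divide by the seasonal component (multiplicative model).
seasonally_adjusted = y / result.seasonal
print(seasonally_adjusted[:5])
```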
Over the years, different versions of the Census method II evolved at the Census Bureau; the method that has become most popular and is used most widely in government and business is the so-called X-11 variant of the Census method II (see Shiskin, Young, & Musgrave, 1967). Subsequently, the term X-11 has become synonymous with this refined version of the Census method II. In addition to the documentation that can be obtained from the Census Bureau, a detailed summary of this method is also provided in Makridakis, Wheelwright, and McGee (1983) and Makridakis and Wheelwright (1989). For more information on this method, see the following topics: For more information on other Time Series methods, see Time Series Analysis - Index and the following topics: Seasonal Adjustment: Basic Ideas and Terms Suppose you recorded the monthly passenger load on international flights for a period of 12 years (see Box & Jenkins, 1976). If you plot those data, it is apparent that (1) there appears to be an upwards linear trend in the passenger loads over the years, and (2) there is a recurring pattern or seasonality within each year (i.e., most travel occurs during the summer months, and a minor peak occurs during the December holidays). The purpose of seasonal decomposition and adjustment is to isolate those components, that is, to de-compose the series into the trend effect, seasonal effects, and remaining variability. The classic technique designed to accomplish this decomposition was developed in the 1920s and is also known as the Census I method (see the Census I overview section). This technique is also described and discussed in detail in Makridakis, Wheelwright, and McGee (1983), and Makridakis and Wheelwright (1989). General model. The general idea of seasonal decomposition is straightforward. In general, a time series like the one described above can be thought of as consisting of four different components: (1) a seasonal component (denoted as S_t, where t stands for the particular point in time), (2) a trend component (T_t), (3) a cyclical component (C_t), and (4) a random, error, or irregular component (I_t). The difference between a cyclical and a seasonal component is that the latter occurs at regular (seasonal) intervals, while cyclical factors usually have a longer duration that varies from cycle to cycle. The trend and cyclical components are customarily combined into a trend-cycle component (TC_t). The specific functional relationship between these components can assume different forms. However, two straightforward possibilities are that they combine in an additive or a multiplicative fashion: X_t = TC_t + S_t + I_t, or X_t = TC_t · S_t · I_t, where X_t represents the observed value of the time series at time t. Given some a priori knowledge about the cyclical factors affecting the series (e.g., business cycles), the estimates for the different components can be used to compute forecasts for future observations. (However, the Exponential smoothing method, which can also incorporate seasonality and trend components, is the preferred technique for forecasting purposes.) Additive and multiplicative seasonality. Consider the difference between an additive and multiplicative seasonal component in an example: The annual sales of toys will probably peak in the months of November and December, and perhaps during the summer (with a much smaller peak) when children are on their summer break. This seasonal pattern will likely repeat every year. Seasonal components can be additive or multiplicative in nature.
For example, during the month of December the sales for a particular toy may increase by 3 million dollars every year. Thus, you could add to your forecasts for every December the amount of 3 million to account for this seasonal fluctuation. In this case, the seasonality is additive. Alternatively, during the month of December the sales for a particular toy may increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, then the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would be 1.4). In plots of series, the distinguishing characteristic between these two types of seasonal components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case, the size of the seasonal fluctuations varies, depending on the overall level of the series. Additive and multiplicative trend-cycle. The previous example can be extended to illustrate the additive and multiplicative trend-cycle components. In terms of the toy example, a fashion trend may produce a steady increase in sales (e.g., a trend towards more educational toys in general); as with the seasonal component, this trend may be additive (sales increase by 3 million dollars per year) or multiplicative (sales increase by 30%, or by a factor of 1.3, annually) in nature. In addition, cyclical components may impact sales. To reiterate, a cyclical component is different from a seasonal component in that it usually is of longer duration, and that it occurs at irregular intervals. For example, a particular toy may be particularly hot during a summer season (e.g., a particular doll which is tied to the release of a major children's movie, and is promoted with extensive advertising). Again, such a cyclical component can affect sales in an additive or multiplicative manner. The Census II Method The basic method for seasonal decomposition and adjustment outlined in the Basic Ideas and Terms topic can be refined in several ways. In fact, unlike many other time-series modeling techniques (e.g., ARIMA) which are grounded in some theoretical model of an underlying process, the X-11 variant of the Census II method simply contains many ad hoc features and refinements that over the years have proven to provide excellent estimates for many real-world applications (see Burman, 1979; Kendall & Ord, 1990; Makridakis & Wheelwright, 1989; Wallis, 1974). Some of the major refinements are listed below. Trading-day adjustment. Different months have different numbers of days, and different numbers of trading-days (i.e., Mondays, Tuesdays, etc.). When analyzing, for example, monthly revenue figures for an amusement park, the fluctuation in the different numbers of Saturdays and Sundays (peak days) in the different months will surely contribute significantly to the variability in monthly revenues. The X-11 variant of the Census II method allows the user to test whether such trading-day variability exists in the series, and, if so, to adjust the series accordingly. Extreme values. Most real-world time series contain outliers, that is, extreme fluctuations due to rare events.
For example, a strike may affect production in a particular month of one year. Such extreme outliers may bias the estimates of the seasonal and trend components. The X-11 procedure includes provisions to deal with extreme values through the use of statistical control principles, that is, values that are above or below a certain range (expressed in terms of multiples of sigma, the standard deviation) can be modified or dropped before final estimates for the seasonality are computed.

Multiple refinements. The refinements for outliers, extreme values, and different numbers of trading-days can be applied more than once, in order to obtain successively improved estimates of the components. The X-11 method applies a series of successive refinements of the estimates to arrive at the final trend-cycle, seasonal, and irregular components, and the seasonally adjusted series.

Tests and summary statistics. In addition to estimating the major components of the series, various summary statistics can be computed. For example, analysis of variance tables can be prepared to test the significance of seasonal variability and trading-day variability (see above) in the series; the X-11 procedure will also compute the percentage change from month to month in the random and trend-cycle components. As the duration or span in terms of months (or quarters, for quarterly X-11) increases, the change in the trend-cycle component will likely also increase, while the change in the random component should remain about the same. The width of the average span at which the changes in the random component are about equal to the changes in the trend-cycle component is called the month (quarter) for cyclical dominance, or MCD (QCD) for short. For example, if the MCD is equal to 2, then you can infer that over a 2-month span the trend-cycle will dominate the fluctuations of the irregular (random) component. These and various other results are discussed in greater detail below.

Result Tables Computed by the X-11 Method

The computations performed by the X-11 procedure are best discussed in the context of the results tables that are reported. The adjustment process is divided into seven major steps, which are customarily labeled with the consecutive letters A through G.

A. Prior adjustment (monthly seasonal adjustment only). Before any seasonal adjustment is performed on the monthly time series, various prior user-defined adjustments can be incorporated. The user can specify a second series that contains prior adjustment factors; the values in that series will either be subtracted (additive model) from the original series, or the original series will be divided by these values (multiplicative model). For multiplicative models, user-specified trading-day adjustment weights can also be specified. These weights will be used to adjust the monthly observations depending on the number of respective trading-days represented by the observation.

B. Preliminary estimation of trading-day variation (monthly X-11) and weights. Next, preliminary trading-day adjustment factors (monthly X-11 only) and weights for reducing the effect of extreme observations are computed.

C. Final estimation of trading-day variation and irregular weights (monthly X-11). The adjustments and weights computed in B above are then used to derive improved trend-cycle and seasonal estimates. These improved estimates are used to compute the final trading-day factors (monthly X-11 only) and weights.
D. Final estimation of seasonal factors, trend-cycle, irregular, and seasonally adjusted series. The final trading-day factors and weights computed in C above are used to compute the final estimates of the components.

E. Modified original, seasonally adjusted, and irregular series. The original series, the final seasonally adjusted series, and the irregular component are modified for extremes. The resulting modified series allow the user to examine the stability of the seasonal adjustment.

F. Month (quarter) for cyclical dominance (MCD, QCD), moving average, and summary measures. In this part of the computations, various summary measures (see below) are computed to allow the user to examine the relative importance of the different components, the average fluctuation from month to month (quarter to quarter), the average number of consecutive changes in the same direction (average number of runs), etc.

G. Charts. Finally, various charts (graphs) are computed to summarize the results. For example, the final seasonally adjusted series will be plotted in chronological order or by month (see below).

Specific Description of all Result Tables Computed by the X-11 Method

In each part A through G of the analysis (see Result Tables Computed by the X-11 Method), different result tables are computed. Customarily, these tables are numbered and also identified by a letter to indicate the respective part of the analysis. For example, table B 11 shows the initial seasonally adjusted series; C 11 is the refined seasonally adjusted series; and D 11 is the final seasonally adjusted series. Shown below is a list of all available tables. Those tables identified by an asterisk (*) are not available (applicable) when analyzing quarterly series. (Also, for quarterly adjustment, some of the computations outlined below are slightly different; for example, instead of a 12-term monthly moving average, a 4-term quarterly moving average is applied to compute the seasonal factors; the initial trend-cycle estimate is computed via a centered 4-term moving average; and the final trend-cycle estimate in each part is computed by a 5-term Henderson average.) Following the convention of the Bureau of the Census version of the X-11 method, three levels of printout detail are offered: Standard (17 to 27 tables), Long (27 to 39 tables), and Full (44 to 59 tables). In the description of each table below, the letters S, L, and F are used next to each title to indicate which tables will be displayed and/or printed at the respective setting of the output option. (For the charts, two levels of detail are available: Standard and All.) See the table name below to obtain more information about that table.

A 2. Prior Monthly Adjustment Factors (S)

Tables B 14 through B 16, B 18, and B 19: Adjustment for trading-day variation. These tables are only available when analyzing monthly series. Different months contain different numbers of days of the week (i.e., Mondays, Tuesdays, etc.). In some series, the variation in the different numbers of trading-days may contribute significantly to monthly fluctuations (e.g., the monthly revenues of an amusement park will be greatly influenced by the number of Saturdays/Sundays in each month). The user can specify initial weights for each trading-day (see A 4), and/or these weights can be estimated from the data (the user can also choose to apply those weights conditionally, i.e., only if they explain a significant proportion of variance).

B 14. Extreme Irregular Values Excluded from Trading-day Regression (L)
B 15. Preliminary Trading-day Regression (L)
B 16. Trading-day Adjustment Factors Derived from Regression Coefficients (F)
B 17. Preliminary Weights for Irregular Component (L)
B 18. Trading-day Factors Derived from Combined Daily Weights (F)
B 19. Original Series Adjusted for Trading-day and Prior Variation (F)
C 1. Original Series Modified by Preliminary Weights and Adjusted for Trading-day and Prior Variation (L)

Tables C 14 through C 16, C 18, and C 19: Adjustment for trading-day variation. These tables are only available when analyzing monthly series, and when adjustment for trading-day variation is requested. In that case, the trading-day adjustment factors are computed from the refined adjusted series, analogous to the adjustment performed in part B (B 14 through B 16, B 18, and B 19).

C 14. Extreme Irregular Values Excluded from Trading-day Regression (S)
C 15. Final Trading-day Regression (S)
C 16. Final Trading-day Adjustment Factors Derived from Regression Coefficients (S)
C 17. Final Weights for Irregular Component (S)
C 18. Final Trading-day Factors Derived from Combined Daily Weights (S)
C 19. Original Series Adjusted for Trading-day and Prior Variation (S)
D 1. Original Series Modified by Final Weights and Adjusted for Trading-day and Prior Variation (L)

Distributed Lags Analysis

General Purpose

Distributed lags analysis is a specialized technique for examining the relationships between variables that involve some delay. For example, suppose that you are a manufacturer of computer software and you want to determine the relationship between the number of inquiries that are received and the number of orders that are placed by your customers. You could record those numbers monthly for a one-year period and then correlate the two variables. However, inquiries will obviously precede actual orders, and you can expect that the number of orders will follow the number of inquiries with some delay. Put another way, there will be a (time-)lagged correlation between the number of inquiries and the number of orders that are received.

Time-lagged correlations are particularly common in econometrics. For example, the benefits of investments in new machinery usually only become evident after some time. Higher income will change people's choice of rental apartments; however, this relationship will be lagged because it will take some time for people to terminate their current leases, find new apartments, and move. In general, the relationship between capital appropriations and capital expenditures will be lagged, because it will require some time before investment decisions are actually acted upon.

In all of these cases, we have an independent or explanatory variable that affects the dependent variable with some lag. The distributed lags method allows you to investigate those lags. Detailed discussions of distributed lags correlation can be found in most econometrics textbooks, for example, in Judge, Griffiths, Hill, Lütkepohl, and Lee (1985), Maddala (1977), and Fomby, Hill, and Johnson (1984). In the following paragraphs we will present a brief description of these methods. We will assume that you are familiar with the concept of correlation (see Basic Statistics) and the basic ideas of multiple regression (see Multiple Regression).
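To make the idea concrete, the following R sketch simulates the inquiries-and-orders situation described above and then looks for the lag. The variable names, the two-month delay, and the coefficient of 0.6 are invented for illustration only and do not correspond to any particular software's distributed-lags routine.

set.seed(1)
n <- 60
# Invented example data: inquiries follow an AR(1) pattern, and each month's orders
# depend on the inquiries received two months earlier (coefficient 0.6).
inquiries <- as.numeric(100 + arima.sim(model = list(ar = 0.5), n = n))
orders <- numeric(n)
orders[1:2] <- 20 + rnorm(2, sd = 2)                      # no inquiry history yet
orders[3:n] <- 20 + 0.6 * inquiries[1:(n - 2)] + rnorm(n - 2, sd = 2)

# Lagged cross-correlations: inquiries made earlier line up with current orders.
ccf(inquiries, orders, lag.max = 6)

# An unrestricted distributed-lag regression of orders on current and lagged inquiries.
X <- embed(inquiries, 4)      # columns: inquiries at lags 0, 1, 2, 3
y <- orders[4:n]
fit <- lm(y ~ X)
summary(fit)                  # the lag-2 column should carry most of the weight (about 0.6)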
General Model

Suppose we have a dependent variable y and an independent or explanatory variable x, which are both measured repeatedly over time. In some textbooks, the dependent variable is also referred to as the endogenous variable, and the independent or explanatory variable as the exogenous variable. The simplest way to describe the relationship between the two would be in a simple linear relationship:

y_t = beta_0*x_t + beta_1*x_(t-1) + beta_2*x_(t-2) + ...

In this equation, the value of the dependent variable at time t is expressed as a linear function of x measured at times t, t-1, t-2, etc. Thus, the dependent variable is a linear function of x, and x is lagged by 1, 2, etc. time periods. The beta weights (beta_i) can be considered slope parameters in this equation. You may recognize this equation as a special case of the general linear regression equation (see the Multiple Regression overview). If the weights for the lagged time periods are statistically significant, we can conclude that the y variable is predicted (or explained) with the respective lag.

Almon Distributed Lag

A common problem that often arises when computing the weights for the multiple linear regression model shown above is that the adjacent (in time) values of the x variable are highly correlated. In extreme cases, their independent contributions to the prediction of y may become so redundant that the correlation matrix of measures can no longer be inverted, and thus the beta weights cannot be computed. In less extreme cases, the computation of the beta weights and their standard errors can become very imprecise, due to round-off error. In the context of Multiple Regression this general computational problem is discussed as the multicollinearity or matrix ill-conditioning issue.

Almon (1965) proposed a procedure that will reduce the multicollinearity in this case. Specifically, suppose we express each weight in the linear regression equation as a polynomial in the lag index:

beta_i = alpha_0 + alpha_1*i + alpha_2*i^2 + ... + alpha_q*i^q

Almon showed that in many cases it is easier (i.e., it avoids the multicollinearity problem) to estimate the alpha values than the beta weights directly. Note that with this method, the precision of the beta weight estimates is dependent on the degree or order of the polynomial approximation.

Misspecifications. A general problem with this technique is that, of course, the lag length and the correct polynomial degree are not known a priori. The effects of misspecifying these parameters are potentially serious (in terms of biased estimation). This issue is discussed in greater detail in Frost (1975), Schmidt and Waud (1973), Schmidt and Sickles (1975), and Trivedi and Pagan (1979).

Single Spectrum (Fourier) Analysis

Spectrum analysis is concerned with the exploration of cyclical patterns of data. The purpose of the analysis is to decompose a complex time series with cyclical components into a few underlying sinusoidal (sine and cosine) functions of particular wavelengths. The term spectrum provides an appropriate metaphor for the nature of this analysis: suppose you study a beam of white sunlight, which at first looks like a random (white noise) accumulation of light of different wavelengths. However, when put through a prism, we can separate the different wavelengths or cyclical components that make up white sunlight. In fact, via this technique we can identify and distinguish between different sources of light. Thus, by identifying the important underlying cyclical components, we have learned something about the phenomenon of interest.
In essence, performing spectrum analysis on a time series is like putting the series through a prism in order to identify the wavelengths and importance of underlying cyclical components. As a result of a successful analysis, you might uncover just a few recurring cycles of different lengths in the time series of interest, which at first looked more or less like random noise.

A much-cited example for spectrum analysis is the cyclical nature of sunspot activity (e.g., see Bloomfield, 1976, or Shumway, 1988). It turns out that sunspot activity varies over 11-year cycles. Other examples of celestial phenomena, weather patterns, fluctuations in commodity prices, economic activity, etc. are also often used in the literature to demonstrate this technique.

To contrast this technique with ARIMA or Exponential Smoothing: the purpose of spectrum analysis is to identify seasonal fluctuations of different lengths, whereas in ARIMA and exponential smoothing models the length of the seasonal component is usually known (or guessed) a priori and then included in some theoretical model of moving averages or autocorrelations.

The classic text on spectrum analysis is Bloomfield (1976); other detailed discussions can be found in Jenkins and Watts (1968), Brillinger (1975), Brigham (1974), Elliott and Rao (1982), Priestley (1981), Shumway (1988), or Wei (1989).

Cross-Spectrum Analysis

General Introduction

Cross-spectrum analysis is an extension of Single Spectrum (Fourier) Analysis to the simultaneous analysis of two series. In the following paragraphs, we will assume that you have already read the introduction to single spectrum analysis. Detailed discussions of this technique can be found in Bloomfield (1976), Jenkins and Watts (1968), Brillinger (1975), Brigham (1974), Elliott and Rao (1982), Priestley (1981), Shumway (1988), or Wei (1989).

The purpose of cross-spectrum analysis is to uncover the correlations between two series at different frequencies. For example, sunspot activity may be related to weather phenomena here on earth. If so, then if we were to record those phenomena (e.g., yearly average temperature) and submit the resulting series to a cross-spectrum analysis together with the sunspot data, we may find that the weather indeed correlates with the sunspot activity at the 11-year cycle. That is, we may find a periodicity in the weather data that is in sync with the sunspot cycles. We can easily think of other areas of research where such knowledge could be very useful; for example, various economic indicators may show similar (correlated) cyclical behavior, various physiological measures will likely also display coordinated (i.e., correlated) cyclical behavior, and so on.
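Before turning to the bivariate case, the 11-year sunspot periodicity referred to above is easy to verify with a single-series spectrum. A minimal R sketch using the yearly sunspot numbers that ship with R (sunspot.year); the smoothing spans and taper proportion are arbitrary choices, not recommendations:

# Smoothed periodogram of the yearly sunspot numbers included with R.
s <- spec.pgram(sunspot.year, spans = c(5, 5), taper = 0.1,
                detrend = TRUE, demean = TRUE, plot = TRUE)
# Frequency with the largest estimated spectral density, and the implied period,
# which should come out at roughly 10 to 11 years:
f.peak <- s$freq[which.max(s$spec)]
1 / f.peak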
Basic Notation and Principles

A simple example. Consider the following two series with 16 cases (the values are those generated by the formulas given at the end of this section).

Results for Each Variable. The complete summary contains all spectrum statistics computed for each variable, as described in the Single Spectrum (Fourier) Analysis overview section. Looking at the results shown above, it is clear that both variables show strong periodicities at the frequencies .0625 and .1875.

Cross-Periodogram, Cross-Density, Quadrature-Density, Cross-Amplitude. Analogous to the results for the single variables, the complete summary will also display periodogram values for the cross-periodogram. However, the cross-spectrum consists of complex numbers that can be divided into a real and an imaginary part. These can be smoothed to obtain the cross-density and quadrature-density (quad density for short) estimates, respectively. (The reasons for smoothing, and the different common weight functions for smoothing, are discussed in Single Spectrum (Fourier) Analysis.) The square root of the sum of the squared cross-density and quad-density values is called the cross-amplitude. The cross-amplitude can be interpreted as a measure of covariance between the respective frequency components in the two series. Thus we can conclude from the results shown in the table above that the .0625 and .1875 frequency components in the two series covary.

Squared Coherency, Gain, and Phase Shift. There are additional statistics that can be displayed in the complete summary.

Squared coherency. You can standardize the cross-amplitude values by squaring them and dividing by the product of the spectrum density estimates for each series. The result is called the squared coherency, which can be interpreted similarly to the squared correlation coefficient (see Correlations - Overview); that is, the coherency value is the squared correlation between the cyclical components in the two series at the respective frequency. However, the coherency values should not be interpreted by themselves; for example, when the spectral density estimates in both series are very small, large coherency values may result (the divisor in the computation of the coherency values will be very small), even though there are no strong cyclical components in either series at the respective frequencies.

Gain. The gain value is computed by dividing the cross-amplitude value by the spectrum density estimate for one of the two series in the analysis. Consequently, two gain values are computed, which can be interpreted as the standard least squares regression coefficients for the respective frequencies.

Phase shift. Finally, the phase shift estimates are computed as the arctangent (tan^-1) of the ratio of the quad-density estimates to the cross-density estimates. The phase shift estimates are measures of the extent to which each frequency component of one series leads the other.

How the Example Data were Created. Now, let's return to the example data set presented above. The large spectral density estimates for both series, and the cross-amplitude values at frequencies .0625 and .1875, suggest two strong synchronized periodicities in both series at those frequencies. In fact, the two series were created as:

v1 = cos(2*pi*.0625*(v0-1)) + .75*sin(2*pi*.2*(v0-1))
v2 = cos(2*pi*.0625*(v0+2)) + .75*sin(2*pi*.2*(v0+2))

(where v0 is the case number). Indeed, the analysis presented in this overview reproduced the periodicity inserted into the data very well.
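The same example can be reproduced in R: spec.pgram() accepts a bivariate series and returns the squared coherency and phase along with the individual spectra. This is only a sketch of the idea; the exact values depend on the smoothing span, and spans = 3 below is an arbitrary choice.

# Recreate the two series from the formulas above (v0 is the case number, 1..16).
v0 <- 1:16
v1 <- cos(2 * pi * 0.0625 * (v0 - 1)) + 0.75 * sin(2 * pi * 0.2 * (v0 - 1))
v2 <- cos(2 * pi * 0.0625 * (v0 + 2)) + 0.75 * sin(2 * pi * 0.2 * (v0 + 2))

# Cross-spectral analysis of the bivariate series; some smoothing is required for
# the coherency estimates to be meaningful.
cs <- spec.pgram(ts(cbind(v1, v2)), spans = 3, taper = 0, detrend = FALSE, plot = FALSE)

# Squared coherency and phase shift at each Fourier frequency; the coherency should
# be high near the .0625 and .1875 frequencies discussed above.
data.frame(freq = cs$freq, coherency = cs$coh[, 1], phase = cs$phase[, 1])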
Spectrum Analysis - Basic Notation and Principles

Frequency and Period

The wavelength of a sine or cosine function is typically expressed in terms of the number of cycles per unit time (the frequency), often denoted by the Greek letter nu (some textbooks also use f). For example, the number of letters handled in a post office may show 12 cycles per year: on the first of every month a large amount of mail is sent (many bills come due on the first of the month), then the amount of mail decreases in the middle of the month, then it increases again towards the end of the month. Therefore, every month the fluctuation in the amount of mail handled by the post office will go through a full cycle. Thus, if the unit of analysis is one year, then nu would be equal to 12, as there would be 12 cycles per year. Of course, there will likely be other cycles with different frequencies. For example, there might be annual cycles (nu = 1), and perhaps weekly cycles (nu = 52, as there are 52 weeks per year).

The period T of a sine or cosine function is defined as the length of time required for one full cycle. Thus, it is the reciprocal of the frequency: T = 1/nu. To return to the mail example in the previous paragraph, the monthly cycle, expressed in yearly terms, would be equal to 1/12 = 0.0833. Put into words, there is a period in the series of length 0.0833 years.

The General Structural Model

As mentioned before, the purpose of spectrum analysis is to decompose the original series into underlying sine and cosine functions of different frequencies, in order to determine those that appear particularly strong or important. One way to do so would be to cast the issue as a linear Multiple Regression problem, where the dependent variable is the observed time series, and the independent variables are the sine and cosine functions of all possible (discrete) frequencies. Such a linear multiple regression model can be written as:

x_t = a_0 + sum over k of [a_k*cos(lambda_k*t) + b_k*sin(lambda_k*t)],  k = 1, ..., q

Following the common notation from classical harmonic analysis, in this equation lambda_k is the frequency expressed in terms of radians per unit time, that is, lambda_k = 2*pi*nu_k, where pi is the constant 3.1416... and nu_k = k/q. What is important here is to recognize that the computational problem of fitting sine and cosine functions of different lengths to the data can be considered in terms of multiple linear regression. Note that the cosine parameters a_k and sine parameters b_k are regression coefficients that tell us the degree to which the respective functions are correlated with the data. Overall there are q different sine and cosine functions; intuitively (as also discussed in Multiple Regression), it should be clear that we cannot have more sine and cosine functions than there are data points in the series. Without going into detail, if there are N data points in the series, then there will be N/2 + 1 cosine functions and N/2 - 1 sine functions. In other words, there will be as many different sinusoidal waves as there are data points, and we will be able to completely reproduce the series from the underlying functions. (Note that if the number of cases in the series is odd, then the last data point will usually be ignored; for a sinusoidal function to be identified, you need at least two points: the high peak and the low peak.) To summarize, spectrum analysis will identify the correlation of sine and cosine functions of different frequency with the observed data.
If a large correlation (sine or cosine coefficient) is identified, you can conclude that there is a strong periodicity of the respective frequency (or period) in the data.

Complex numbers (real and imaginary numbers). In many textbooks on spectrum analysis, the structural model shown above is presented in terms of complex numbers, that is, the parameter estimation process is described in terms of the Fourier transform of a series into real and imaginary parts. Complex numbers are the superset that includes all real and imaginary numbers. Imaginary numbers, by definition, are numbers that are multiplied by the constant i, where i is defined as the square root of -1. Obviously, the square root of -1 does not exist among the real numbers, hence the term imaginary number; however, meaningful arithmetic operations on imaginary numbers can still be performed (e.g., (2i)^2 = -4). It is useful to think of real and imaginary numbers as forming a two-dimensional plane, where the horizontal or X-axis represents all real numbers and the vertical or Y-axis represents all imaginary numbers. Complex numbers can then be represented as points in the two-dimensional plane. For example, the complex number 3 + 2i can be represented by a point with coordinates (3, 2) in this plane. You can also think of complex numbers as angles; for example, you can connect the point representing a complex number in the plane with the origin (the complex number 0 + 0i) and measure the angle of that vector to the horizontal line. Thus, intuitively you can see how the spectrum decomposition formula shown above, consisting of sine and cosine functions, can be rewritten in terms of operations on complex numbers. In fact, in this manner the mathematical discussion and required computations are often more elegant and easier to perform, which is why many textbooks prefer the presentation of spectrum analysis in terms of complex numbers.

A Simple Example

Shumway (1988) presents a simple example to clarify the underlying mechanics of spectrum analysis. Let's create a series with 16 cases following the equation shown above, and then see how we may extract the information that was put into it. First, create a variable and define it as:

x = 1*cos(2*pi*.0625*(v0-1)) + .75*sin(2*pi*.2*(v0-1))

(where v0 is the case number). This variable is made up of two underlying periodicities: the first at the frequency of .0625 (or period 1/.0625 = 16; one observation completes 1/16th of a full cycle, and a full cycle is completed every 16 observations) and the second at the frequency of .2 (or period of 5). The cosine coefficient (1.0) is larger than the sine coefficient (.75). The spectrum analysis summary is shown below. Let's now review the columns. Clearly, the largest cosine coefficient can be found for the .0625 frequency. A smaller sine coefficient can be found at frequency .1875. Thus, clearly the two sine/cosine frequencies which were inserted into the example data file are reflected in the above table.

Periodogram

The sine and cosine functions are mutually independent (or orthogonal); thus we may sum the squared coefficients for each frequency to obtain the periodogram. Specifically, the periodogram values above are computed as:

P_k = (sine coefficient_k^2 + cosine coefficient_k^2) * N/2

where P_k is the periodogram value at frequency nu_k and N is the overall length of the series. The periodogram values can be interpreted in terms of variance (sums of squares) of the data at the respective frequency or period. Customarily, the periodogram values are plotted against the frequencies or periods.
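A short R sketch that recreates this 16-case example and recovers the cosine and sine coefficients and the periodogram directly from the definitions above; because the sinusoids at the Fourier frequencies are orthogonal, the regression coefficients reduce to simple projections.

# Recreate the 16-case example and recover the coefficients and the periodogram.
N  <- 16
tt <- 1:N
x  <- 1.0 * cos(2 * pi * 0.0625 * (tt - 1)) + 0.75 * sin(2 * pi * 0.2 * (tt - 1))

freqs <- (1:(N / 2)) / N     # the Fourier frequencies 1/16, 2/16, ..., 8/16
a <- sapply(freqs, function(f) (2 / N) * sum(x * cos(2 * pi * f * tt)))   # cosine coefficients
b <- sapply(freqs, function(f) (2 / N) * sum(x * sin(2 * pi * f * tt)))   # sine coefficients

# Periodogram as defined in the text: P_k = (a_k^2 + b_k^2) * N / 2
P <- (a^2 + b^2) * N / 2
round(data.frame(freq = freqs, cosine = a, sine = b, periodogram = P), 3)
# The largest values appear at frequency .0625; the .2 component is not an exact
# Fourier frequency here, so most of it shows up near .1875 (see "leakage" below).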
The Problem of Leakage

In the example above, a sine function with a frequency of 0.2 was inserted into the series. However, because of the length of the series (16), none of the frequencies reported exactly hits on that frequency. In practice, what often happens in those cases is that the respective frequency will leak into adjacent frequencies. For example, you may find large periodogram values for two adjacent frequencies when, in fact, there is only one strong underlying sine or cosine function at a frequency that falls in between those implied by the length of the series. There are three ways in which we can approach the problem of leakage: by padding the series, we may apply a finer frequency roster to the data; by tapering the series prior to the analysis, we may reduce leakage; or by smoothing the periodogram, we may identify the general frequency regions (spectral densities) that significantly contribute to the cyclical behavior of the series. See below for descriptions of each of these approaches.

Padding the Time Series

Because the Fourier frequencies are spaced 1/N apart (where N is the length of the series), we can simply pad the series with a constant (e.g., zeros) and thereby introduce smaller increments in the frequency values. In a sense, padding allows us to apply a finer roster to the data. In fact, if we padded the example data file described above with ten zeros, the results would not change; that is, the largest periodogram peaks would still occur at the frequency values closest to .0625 and .2. (Padding is also often desirable for reasons of computational efficiency; see below.)

Tapering

The so-called process of split-cosine-bell tapering is a recommended transformation of the series prior to the spectrum analysis. It usually leads to a reduction of leakage in the periodogram. The rationale for this transformation is explained in detail in Bloomfield (1976, p. 80-94). In essence, a proportion (p) of the data at the beginning and at the end of the series is transformed via multiplication by a set of split-cosine-bell weights, in which m is chosen so that 2m/N is equal to the proportion of data to be tapered (p).

Data Windows and Spectral Density Estimates

In practice, when analyzing actual data, it is usually not of crucial importance to identify exactly the frequencies for particular underlying sine or cosine functions. Rather, because the periodogram values are subject to substantial random fluctuation, we are faced with the problem of very many chaotic periodogram spikes. In that case, we want to find the frequencies with the greatest spectral densities, that is, the frequency regions, consisting of many adjacent frequencies, that contribute most to the overall periodic behavior of the series. This can be accomplished by smoothing the periodogram values via a weighted moving average transformation. Suppose the moving average window is of width m (which must be an odd number); the following are the most commonly used smoothers (note: p = (m-1)/2).

Daniell (or equal weight) window. The Daniell window (Daniell, 1946) amounts to a simple (equal weight) moving average transformation of the periodogram values; that is, each spectral density estimate is computed as the mean of the p preceding, p subsequent, and central periodogram values.

Tukey window. In the Tukey (Blackman and Tukey, 1958) or Tukey-Hanning window (named after Julius von Hann), for each frequency the weights for the weighted moving average of the periodogram values are computed as: Hamming window. In the Hamming (named after R. W.
Hamming) window or Tukey-Hamming window (Blackman and Tukey, 1958), for each frequency the weights for the weighted moving average of the periodogram values are computed as: Parzen window. In the Parzen window (Parzen, 1961), for each frequency the weights for the weighted moving average of the periodogram values are computed as: Bartlett window. In the Bartlett window (Bartlett, 1950) the weights are computed as: With the exception of the Daniell window, all weight functions will assign the greatest weight to the observation being smoothed in the center of the window, and increasingly smaller weights to values that are further away from the center. In many cases, all of these data windows will produce very similar results.

Preparing the Data for Analysis

Let's now consider a few other practical points in spectrum analysis. Usually, we want to subtract the mean from the series and detrend the series (so that it is stationary) prior to the analysis. Otherwise the periodogram and density spectrum will mostly be overwhelmed by a very large value for the first cosine coefficient (for frequency 0.0). In a sense, the mean is a cycle of frequency 0 (zero) per unit time; that is, it is a constant. Similarly, a trend is also of little interest when we want to uncover the periodicities in the series. In fact, both of those potentially strong effects may mask the more interesting periodicities in the data, and thus both the mean and the (linear) trend should be removed from the series prior to the analysis. Sometimes it is also useful to smooth the data prior to the analysis, in order to tame the random noise that may obscure meaningful periodic cycles in the periodogram.

Results when No Periodicity in the Series Exists

Finally, what if there are no recurring cycles in the data, that is, if each observation is completely independent of all other observations? If the distribution of the observations follows the normal distribution, such a time series is also referred to as a white noise series (like the white noise you hear on the radio when tuned in between stations). A white noise input series will result in periodogram values that follow an exponential distribution. Thus, by testing the distribution of periodogram values against the exponential distribution, you can test whether the input series is different from a white noise series. In addition, you can also request the Kolmogorov-Smirnov one-sample d statistic (see also Nonparametrics and Distributions for more details).

Testing for white noise in certain frequency bands. Note that you can also plot the periodogram values for a particular frequency range only. Again, if the input is a white noise series with respect to those frequencies (i.e., there are no significant periodic cycles at those frequencies), then the distribution of the periodogram values should again follow an exponential distribution.

Fast Fourier Transforms (FFT)

General Introduction

The interpretation of the results of spectrum analysis is discussed in the Basic Notation and Principles topic; however, we have not yet described how it is done computationally. Up until the mid-1960s, the standard way of performing the spectrum decomposition was to use explicit formulae to solve for the sine and cosine parameters. The computations involved required at least N^2 (complex) multiplications. Thus, even with today's high-speed computers,
it would be very time consuming to analyze even small time series (e.g., 8,000 observations would result in at least 64 million multiplications).

The time requirements changed drastically with the development of the so-called fast Fourier transform algorithm, or FFT for short. In the mid-1960s, J. W. Cooley and J. W. Tukey (1965) popularized this algorithm, which, in retrospect, had in fact been discovered independently by various individuals. Various refinements and improvements of this algorithm can be found in Monro (1975) and Monro and Branch (1976). Readers interested in the computational details of this algorithm may refer to any of the texts cited in the overview. Suffice it to say that via the FFT algorithm, the time to perform a spectral analysis is proportional to N*log2(N), a huge improvement. However, a drawback of the standard FFT algorithm is that the number of cases in the series must be equal to a power of 2 (i.e., 16, 32, 64, 128, 256, and so on). Usually, this necessitated padding of the series, which, as described above, will in most cases not change the characteristic peaks of the periodogram or the spectral density estimates. In cases, however, where the time units are meaningful, such padding may make the interpretation of results more cumbersome.

Computation of FFT in Time Series

The implementation of the FFT algorithm allows you to take full advantage of the savings afforded by this algorithm. On most standard computers, series with over 100,000 cases can easily be analyzed. However, there are a few things to remember when analyzing series of that size. As mentioned above, the standard (and most efficient) FFT algorithm requires that the length of the input series be equal to a power of 2. If this is not the case, additional computations have to be performed: the simple explicit computational formulas are used as long as the input series is relatively small and the number of computations can be performed in a relatively short amount of time. For long time series, in order to still utilize the FFT algorithm, an implementation of the general approach described by Monro and Branch (1976) is used. This method requires significantly more storage space; however, series of considerable length can still be analyzed very quickly, even if the number of observations is not equal to a power of 2.

For time series of lengths not equal to a power of 2, we would like to make the following recommendations: if the input series is small to moderately sized (e.g., only a few thousand cases), then do not worry; the analysis will typically only take a few seconds anyway. In order to analyze moderately large and large series (e.g., over 100,000 cases), pad the series to a power of 2 and then taper the series during the exploratory part of your data analysis.
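The preparation steps discussed above (demeaning, detrending, tapering, padding, and periodogram smoothing) are all available in base R. The sketch below uses the built-in monthly ldeaths series; the particular taper proportion, amount of padding, and smoothing spans are arbitrary illustrative choices rather than recommendations.

# Preparing a series for spectral analysis with base R, using the monthly ldeaths data.
length(ldeaths)                          # 72 monthly observations
nextn(length(ldeaths), factors = 2)      # next power of 2 an FFT could use (128)

y <- as.numeric(ldeaths)
y.tapered <- spec.taper(y - mean(y), p = 0.1)   # split-cosine-bell taper applied to the ends

# spec.pgram() bundles the whole preparation: demeaning, (linear) detrending, tapering,
# zero-padding, and smoothing of the periodogram with modified Daniell windows.
sd.est <- spec.pgram(ldeaths, spans = c(3, 3), taper = 0.1, pad = 1,
                     detrend = TRUE, demean = TRUE, plot = TRUE)
1 / sd.est$freq[which.max(sd.est$spec)]  # dominant period in years; about 1 (the annual cycle)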