A Python Example of Big-Data Analysis of Operating System Logs

This article walks through an example of analyzing operating system logs in Python using a simple split/map/reduce approach. It is shared here for your reference; the details are as follows:

I. Code

1. Splitting the large file

import os
import os.path

def FileSplit(sourceFile, targetFolder):
  if not os.path.isfile(sourceFile):
    print(sourceFile, ' does not exist.')
    return
  if not os.path.isdir(targetFolder):
    os.mkdir(targetFolder)
  tempData = []
  number = 1000  # lines per split file
  fileNum = 1
  with open(sourceFile, 'r') as srcFile:
    dataLine = srcFile.readline()
    while dataLine:
      for i in range(number):
        tempData.append(dataLine)
        dataLine = srcFile.readline()
        if not dataLine:
          break
      # Name each chunk after the source file plus a running index: test1.txt, test2.txt, ...
      desFile = os.path.join(targetFolder, os.path.basename(sourceFile)[0:-4] + str(fileNum) + '.txt')
      with open(desFile, 'w') as f:
        f.writelines(tempData)
      tempData = []
      fileNum = fileNum + 1

if __name__ == '__main__':
  #sourceFile = input('Input the source file to split:')
  #targetFolder = input('Input the target folder you want to place the split files:')
  sourceFile = 'test.txt'
  targetFolder = 'test'
  FileSplit(sourceFile, targetFolder)
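
If you don't have a suitable log file at hand, the short sketch below generates a hypothetical test.txt so the three scripts can be tried end to end. The file name comes from the code above; the mm/dd/yyyy dates are an assumption taken from the Mapper's regex and the sample results in part II, and real system logs will of course look different.

# Sketch: write a small hypothetical test.txt for experimentation.
# The dates follow the mm/dd/yyyy pattern the Mapper below looks for (an assumption).
import random
dates = ['07/10/2013', '08/15/2013', '10/09/2013', '02/12/2014']
with open('test.txt', 'w') as f:
  for i in range(5000):
    f.write(random.choice(dates) + ' host kernel: sample log entry %d\n' % i)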

2. Mapper code

import os
import re
import threading

def Map(sourceFile):
  if not os.path.exists(sourceFile):
    print(sourceFile, ' does not exist.')
    return
  # Match dates of the form mm/dd/yyyy anywhere in the log line
  pattern = re.compile(r'[0-9]{1,2}/[0-9]{1,2}/[0-9]{4}')
  result = {}
  with open(sourceFile, 'r') as srcFile:
    for dataLine in srcFile:
      r = pattern.findall(dataLine)
      if r:
        # Count the lines that carry each date
        result[r[0]] = result.get(r[0], 0) + 1
  desFile = sourceFile[0:-4] + '_map.txt'
  with open(desFile, 'w') as fp:
    for k, v in result.items():
      fp.write(k + ':' + str(v) + '\n')

if __name__ == '__main__':
  desFolder = 'test'
  # Skip files produced by an earlier mapping run
  files = [f for f in os.listdir(desFolder) if not f.endswith('_map.txt')]
  #Without multithreading you could simply write:
  '''for f in files:
    Map(os.path.join(desFolder, f))'''
  #With multithreading, map each split file in its own thread
  def Main(i):
    Map(os.path.join(desFolder, files[i]))
  threads = []
  for i in range(len(files)):
    t = threading.Thread(target=Main, args=(i,))
    t.start()
    threads.append(t)
  # Wait for every mapper thread to finish before the script exits
  for t in threads:
    t.join()
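
As a side note, the hand-rolled threads above can also be expressed with concurrent.futures, which waits for all workers when the pool is closed. This is only a sketch of an equivalent approach, not part of the original example; it assumes the Map function defined above is in scope and reuses the 'test' folder name.

# Sketch: fan Map out over the split files with a thread pool (alternative to threading.Thread).
import os
from concurrent.futures import ThreadPoolExecutor

if __name__ == '__main__':
  desFolder = 'test'
  files = [os.path.join(desFolder, f) for f in os.listdir(desFolder) if not f.endswith('_map.txt')]
  with ThreadPoolExecutor(max_workers=4) as pool:
    # The with-block does not exit until every Map call has finished
    list(pool.map(Map, files))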

3. Reducer code

import os

def Reduce(sourceFolder, targetFile):
  if not os.path.isdir(sourceFolder):
    print(sourceFolder, ' does not exist.')
    return
  result = {}
  # Deal only with the mapped files
  allFiles = [os.path.join(sourceFolder, f) for f in os.listdir(sourceFolder) if f.endswith('_map.txt')]
  for f in allFiles:
    with open(f, 'r') as fp:
      for line in fp:
        line = line.strip()
        if not line:
          continue
        # Each mapped line has the form "mm/dd/yyyy:count"
        position = line.index(':')
        key = line[0:position]
        value = int(line[position + 1:])
        result[key] = result.get(key, 0) + value
  with open(targetFile, 'w') as fp:
    for k, v in result.items():
      fp.write(k + ':' + str(v) + '\n')

if __name__ == '__main__':
  Reduce('test', os.path.join('test', 'result.txt'))
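
The reduced output is written in whatever order the dictionary yields its keys, which is why the dates in part II below are not chronological. If a sorted listing is wanted, a small sketch like the following could reorder result.txt by date; it assumes lines of the form mm/dd/yyyy:count, as produced by Reduce above.

# Sketch: print the reduced counts in chronological order.
import os
from datetime import datetime

with open(os.path.join('test', 'result.txt'), 'r') as fp:
  pairs = [line.strip().split(':') for line in fp if line.strip()]
for key, value in sorted(pairs, key=lambda p: datetime.strptime(p[0], '%m/%d/%Y')):
  print(key + ':' + value)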

II. Results

Running the three programs above in sequence produces the final result:

07/10/2013:4634
07/16/2013:51
08/15/2013:3958
07/11/2013:1
10/09/2013:733
12/11/2013:564
02/12/2014:4102
05/14/2014:737

We hope this article is helpful for your Python programming.
