Solving a Regression Problem with Python


title: Solving a Regression Problem with Python
cover: https://gitee.com/Asimok/picgo/raw/master/img/MacBookPro/20210801104603.png
categories: Machine Learning
tags:
  - Python
  - Machine Learning
keywords: 'Machine Learning, Python'
date: 2021-8-1

Solving a Regression Problem with Python

We fit a line y = wx + b to a set of points by minimizing the mean squared error with gradient descent.
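The loss being minimized is the mean squared error over the N data points. The gradient expressions that appear as comments in the code below (grad_b and grad_w) follow directly from differentiating it, written out here for reference:

```latex
\ell(w, b) = \frac{1}{N}\sum_{i=1}^{N}\left(w x_i + b - y_i\right)^2

\frac{\partial \ell}{\partial b} = \frac{2}{N}\sum_{i=1}^{N}\left(w x_i + b - y_i\right)
\qquad
\frac{\partial \ell}{\partial w} = \frac{2}{N}\sum_{i=1}^{N} x_i\left(w x_i + b - y_i\right)
```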

```python
import numpy as np
import matplotlib.pyplot as plt

# Compute the mean squared error of the line y = wx + b over all points
def compute_error_for_line_given_points(b, w, points):
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # accumulate the squared error for this point
        totalError += (y - (w * x + b)) ** 2
    # average loss over all points
    return totalError / float(len(points))

# One gradient-descent step: compute the gradients and update (b, w)
def step_gradient(b_current, w_current, points, learningRate):
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        # partial derivatives, averaged over the N points
        # grad_b = 2(wx+b-y)
        b_gradient += (2/N) * ((w_current * x + b_current) - y)
        # grad_w = 2(wx+b-y)*x
        w_gradient += (2/N) * x * ((w_current * x + b_current) - y)
    # update b and w: the gradient points toward increasing loss,
    # so we step in the opposite direction
    new_b = b_current - (learningRate * b_gradient)
    new_w = w_current - (learningRate * w_gradient)
    # record the loss after this update (loss is a global list defined below)
    temploss = compute_error_for_line_given_points(new_b, new_w, points)
    loss.append(temploss)
    return [new_b, new_w]

# Run num_iterations gradient-descent steps from the given starting point
def gradient_descent_runner(points, starting_b, starting_w, learning_rate, num_iterations):
    b = starting_b
    w = starting_w
    for i in range(num_iterations):
        b, w = step_gradient(b, w, np.array(points), learning_rate)
    return [b, w]

def run():
    points = np.genfromtxt("data.csv", delimiter=",")
    learning_rate = 0.0001
    initial_b = 0  # initial y-intercept guess
    initial_w = 0  # initial slope guess
    num_iterations = 1000
    print("Starting gradient descent at b = {0}, w = {1}, error = {2}"
          .format(initial_b, initial_w,
                  compute_error_for_line_given_points(initial_b, initial_w, points)))
    print("Running...")
    [b, w] = gradient_descent_runner(points, initial_b, initial_w, learning_rate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, error = {3}"
          .format(num_iterations, b, w,
                  compute_error_for_line_given_points(b, w, points)))

loss = []
run()
```
```
Starting gradient descent at b = 0, w = 0, error = 5565.107834483211
Running...
After 1000 iterations b = 0.08893651993741346, w = 1.4777440851894448, error = 112.61481011613473
```
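As a sanity check (a minimal sketch, assuming the same two-column data.csv of x, y pairs), the closed-form least-squares fit can be computed with np.polyfit; gradient descent should approach this solution as the number of iterations grows:

```python
import numpy as np

points = np.genfromtxt("data.csv", delimiter=",")
# Fit a degree-1 polynomial y = w*x + b; np.polyfit returns the
# coefficients [w, b] that minimize the squared error exactly.
w_opt, b_opt = np.polyfit(points[:, 0], points[:, 1], 1)
print("closed-form fit: w = {0}, b = {1}".format(w_opt, b_opt))
```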
```python
# Plot the recorded loss against the iteration index
x = [i for i in range(0, len(loss))]
plt.plot(x, loss)
```

```
[<matplotlib.lines.Line2D at 0x7f82b2c8bd90>]
```

(Figure: the loss curve, showing the recorded loss over the 1000 training iterations.)
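The per-point Python loop in step_gradient is easy to read but slow on large datasets. Below is a minimal sketch of the same update written with vectorized NumPy operations; step_gradient_vectorized is a hypothetical drop-in replacement for step_gradient, not part of the original code:

```python
import numpy as np

def step_gradient_vectorized(b_current, w_current, points, learningRate):
    # Column 0 holds x, column 1 holds y, matching the loop version
    x, y = points[:, 0], points[:, 1]
    errors = (w_current * x + b_current) - y
    # (2/N) * sum(...) from the loop version, written as 2 * mean(...)
    b_gradient = 2 * errors.mean()
    w_gradient = 2 * (x * errors).mean()
    new_b = b_current - learningRate * b_gradient
    new_w = w_current - learningRate * w_gradient
    return [new_b, new_w]
```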
