I created a program to evaluate definite integrals using the Midpoint Rule.
First, the midpoint rule states that:

    integral from a to b of f(x) dx  ≈  Δx · [f(m1) + f(m2) + ... + f(mn)],

where Δx = (b − a)/n and mi = a + (i − 1/2)·Δx is the midpoint of the i-th subinterval.
Since evaluating definite integrals with the midpoint rule is a repetitive process, it is a natural task to hand off to a program.
Increasing the value of n increases the accuracy of the approximation, but it also makes the calculation much harder to do by hand.
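To see how repetitive it gets, here is a small hand calculation (my own illustrative example): approximating the integral of x² from 0 to 2 with just n = 2 midpoints gives Δx = (2 − 0)/2 = 1, midpoints 0.5 and 1.5, and an approximation of 1 · (0.5² + 1.5²) = 0.25 + 2.25 = 2.5, versus the exact value 8/3 ≈ 2.667. Repeating that by hand for n = 1000 would be hopeless, which is exactly why a program helps.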
Here is my Python code for evaluating a definite integral. You can change the function in the code to integrate any function you want, and increasing the number of midpoints increases the accuracy.
Here's the code:
n = int(input('How many midpoints would you like to have? Enter an integer from 1 to 50000, 50000 being the most precise (you could go higher, but 50000 already gives you more than 10 decimal places of accuracy) and 1 being the least precise: '))
b = float(input('What top value of integration would you like to have? Enter: '))
a = float(input('What bottom value of integration would you like to have? Enter: '))

def f(x):
    # Change this to any function you want to integrate
    # (x**2 is just a placeholder; the post's original function isn't shown)
    return x**2

dx = (b - a) / n                                  # width of each subinterval
xValue = [a + (i + 0.5) * dx for i in range(0, n)]  # midpoint of each subinterval
definite_integral = 0.0
for x in xValue:
    definite_integral += f(x) * dx                # add up f(midpoint) * width
print('Definite integral evaluated by midpoint rule: ' + str(definite_integral))
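As a quick sanity check of the method (separate from the interactive program above, and using my own helper function rather than anything from the original post), here is a sketch that compares the midpoint approximation of the integral of x² from 0 to 1 against the exact value 1/3 for a few values of n:

```python
def midpoint_rule(f, a, b, n):
    # Approximate the integral of f over [a, b] using n midpoints.
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

for n in (10, 100, 1000):
    approx = midpoint_rule(lambda x: x**2, 0.0, 1.0, n)
    print(n, approx, abs(approx - 1/3))  # error shrinks roughly like 1/n**2
```

The error of the midpoint rule shrinks quadratically in n for smooth functions, which is why going past n = 50000 buys very little extra precision.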