
C vs. Python: Speed

Introduction

Python is a very popular interpreted scripting language; C is a very popular compiled language. Because it is compiled, C is generally faster than Python, but it is also lower-level, which makes Python programs quicker and easier to write than their C counterparts. The questions here are whether the extra time a Python program takes to run (without input) outweighs the time saved writing it, and whether run time matters more than programming time.

The Systems Program

I decided to make a simple program that solves the following system of equations:

{ x + y = 14
{ x^2 + y^2 = 100

I quickly wrote the program in Python and found the answers. Then I translated the same program into C. I knew the C version would be relatively longer than the Python one, but that's not what I was looking for. Before we get there, here are my results:

Python:

x = 1
while x <= 14:
    y = 14 - x
    print str(x) + "|" + str(y)
    if x**2 + y**2 == 100:
        print "match"
    x = x + 1

C:

#include <stdio.h>

int main() {
    int x, y;
    for (x = 1; x <= 14; x++) {
        y = 14 - x;
        printf("%d|%d\n", x, y);
        if ((x*x) + (y*y) == 100)
            printf("match\n");
    }
    return 0;
}

Now, I've always heard that C is one of the fastest languages out there. Running both programs from the terminal, I couldn't see any difference between the Python program and the C program by eye, so I fired up the terminal in Ubuntu and typed:

time ./a.out

(The time command, followed by the command that would normally be typed on its own, runs that command and times it - here, it is obviously the C program being tested.) I got 0.001 seconds real time, 0 for the user time, and 0 for the system time. Now, time to test the Python version!

time python (followed by the script's filename)

The figures got a bit scary here: 0.017 seconds for the real time, 0.012 seconds for the user time, and 0.004 seconds for the system time.
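For the record, the search above turns up exactly two matches, which is easy to verify by hand: x = 6, y = 8 gives 36 + 64 = 100, and x = 8, y = 6 gives 64 + 36 = 100. A minimal check of the same brute-force idea, written here in modern Python 3 syntax rather than the Python 2 of the listing above:

```python
# Collect every integer pair (x, y) with x + y = 14 and x^2 + y^2 = 100,
# scanning the same range as the loop in the listing above.
matches = [(x, 14 - x) for x in range(1, 15) if x**2 + (14 - x)**2 == 100]
print(matches)  # -> [(6, 8), (8, 6)]
```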

Sure, the difference in real time is only sixteen thousandths of a second, but it can add up significantly for larger systems that perform many calculations over long periods of time.

The Million Program

I decided to take this idea further and wrote yet another program, one that prints every integer from 0 up to (but not including) 1,000,000 - not exactly the same scale as the aforementioned possibility, of course, but it gives the computer a bit more to print out.

Python:

i = 0
while i < 1000000:
    print i
    i = i + 1

C:

#include <stdio.h>

int main() {
    int i;
    for (i = 0; i < 1000000; i++)
        printf("%d\n", i);
    return 0;
}

Now, time to test out the programs!

C:

real    0m24.625s
user    0m0.652s
sys     0m2.240s

Python:

real    0m29.805s
user    0m1.984s
sys     0m1.812s

Conclusion
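It's worth noticing that the C program's user time is only 0.652 seconds against 24.6 seconds of real time - most of the wait is very likely the terminal drawing a million lines, not the computation itself. The shell's time also includes interpreter start-up. To time just the loop in-process, Python's standard timeit module works; here is a minimal sketch (the count is shortened to 100,000 and output goes to an in-memory buffer instead of the terminal, so it runs quickly anywhere - both of those choices are mine, not from the original experiment):

```python
import io
import timeit

def count_to(n):
    # Write the numbers to an in-memory buffer rather than the terminal,
    # so we measure the loop itself and not the terminal's drawing speed.
    buf = io.StringIO()
    i = 0
    while i < n:
        buf.write(str(i) + "\n")
        i = i + 1

# timeit runs the callable the given number of times and returns
# the total elapsed seconds.
elapsed = timeit.timeit(lambda: count_to(100000), number=5)
print("5 runs of count_to(100000): %.3f seconds" % elapsed)
```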

I have to admit that each language has its strengths and weaknesses, but from these results, I only want to use Python for quick jobs like the systems program shown, or for prototyping C programs, and C for programs where the time taken to process information matters more.

Either way, goals may differ between people and between projects - what's your opinion?

Afterword

After testing and retesting several times, I have found that the programs can be faster or slower depending on what else the computer was doing (on my machine, oddly enough, the more processes were being handled, the faster the programs ran).
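That odd observation aside, the usual way to tame run-to-run noise is to repeat the measurement several times and keep the fastest run, on the theory that background load can only ever add time. A minimal sketch with timeit.repeat (the statement being timed here is just a stand-in workload of my choosing):

```python
import timeit

# Repeat the measurement 5 times, each timing 10 executions, and keep
# the fastest: the minimum is the most repeatable figure because
# background processes can only slow a run down.
times = timeit.repeat("sum(range(100000))", repeat=5, number=10)
print("best of 5: %.4f s, worst: %.4f s" % (min(times), max(times)))
```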
