The idea that you can completely test that an application does what it is supposed to do is generally regarded as impracticable at best and impossible at worst. Unless, of course, you are doing something highly dangerous - like running a nuclear reactor, which absolutely has to be tested to the limit. Moreover, theoretically, it is not possible to "completely" test an application. Suppose that you enter a number into a field: the set of possible numbers is effectively infinite and you cannot test every one. But, generally speaking, if it works with 2,314, it will probably work with 3,612: you can test sets of numbers rather than individual numbers - the familiar technique of equivalence partitioning. So, let's leave this consideration aside: is it practical to test exhaustively if not "completely"?
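To make that concrete, here is a minimal sketch of equivalence partitioning in Python. The field specification (an integer quantity from 1 to 9,999) and the `accepts_quantity` function are hypothetical stand-ins for real application logic: one representative value per class, plus the boundaries, covers the whole input space.

```python
def accepts_quantity(value: int) -> bool:
    """Hypothetical stand-in for the application's validation logic."""
    return 1 <= value <= 9999

# One representative per equivalence class, plus the boundary values:
# below the range, just inside it, and above it.
test_values = {
    -5: False,      # invalid: well below range
    0: False,       # boundary: just below minimum
    1: True,        # boundary: minimum
    2314: True,     # representative valid value
    9999: True,     # boundary: maximum
    10000: False,   # boundary: just above maximum
}

for value, expected in test_values.items():
    assert accepts_quantity(value) == expected, f"failed for {value}"
print("all equivalence classes behave as expected")
```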
The usual assumption is that it is not. Why is that the case? Think about an application from a user's perspective. You start by entering some value into a field. That value belongs to a set of values which you can test for validity. If the entry is valid then one thing happens; if it is invalid, something rather different happens. Each outcome leads to further steps, which are in turn either valid or invalid, and so on. If you wanted to express this graphically you would use a tree diagram or a flow chart. It's not actually very complicated to conceptualise.
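It isn't complicated in code, either. Below is a toy sketch of that tree, assuming an invented two-step checkout flow (the step names are purely illustrative): each decision point branches on whether the entry was valid, and walking the tree enumerates every path a test would have to cover.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One decision point in the application's logic."""
    name: str
    on_valid: "Decision | None" = None
    on_invalid: "Decision | None" = None

# A hypothetical two-step flow: each entry is either valid or invalid.
flow = Decision(
    "enter quantity",
    on_valid=Decision(
        "enter postcode",
        on_valid=Decision("confirm order"),
        on_invalid=Decision("show postcode error"),
    ),
    on_invalid=Decision("show quantity error"),
)

def paths(node, trail=()):
    """Enumerate every root-to-leaf path through the flow."""
    trail = trail + (node.name,)
    children = [c for c in (node.on_valid, node.on_invalid) if c]
    if not children:
        yield trail
    for child in children:
        yield from paths(child, trail)

for p in paths(flow):
    print(" -> ".join(p))
```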
The question then becomes: if you can capture the logic of the application, which is what I have been describing, and if you know all the value sets that apply at each decision point, can you apply the latter to the former and just run it? It's actually an ideal task for a computer to perform - large-scale iterative processing is precisely what the things were built for. Of course, in an agile development environment you probably don't want to do this en masse: you want to test a bit of the logic with the relevant value sets rather than everything all at once. But that's fine - it's just a smaller version of the same thing.
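Here is what "applying the latter to the former" might look like, continuing the sketch above. The value sets are invented, written out the way a tester might list them in a manual test plan; the cross product over a single path is exactly the kind of large-scale iteration computers excel at.

```python
from itertools import product

# Hypothetical value sets per decision point, valid and invalid,
# as they might appear in a written test plan.
value_sets = {
    "quantity": {"valid": [1, 2314, 9999], "invalid": [0, -5, 10000]},
    "postcode": {"valid": ["SW1A 1AA"], "invalid": ["", "12345!"]},
}

# One path through the logic: quantity valid, then postcode invalid.
path = [("quantity", "valid"), ("postcode", "invalid")]

# Every concrete test case for this path is the cross product of the
# relevant value sets. Repeat for every path and you have run the
# value sets against the captured logic exhaustively.
pools = [value_sets[step][outcome] for step, outcome in path]
for combo in product(*pools):
    case = dict(zip((step for step, _ in path), combo))
    print("test case:", case)
```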
The key question, of course, is whether you can capture the complete logic of the application as well as all of the relevant values that have to be driven through the appropriate test cases. On the first count, we know that this is possible because there are existing products that do exactly that. And we know that all the relevant values and/or actions can be derived: just look at the written test plans that get passed to manual testers. The issue is whether we can generate those values automatically, using a tool. It certainly shouldn't be beyond the wit of man. And, finally, you need an engine to process the whole thing.
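A sketch of that value-generation step, assuming an invented specification format (a real tool would read a real spec, but the boundary analysis it performs would be much the same):

```python
def derive_values(spec: dict) -> dict:
    """Derive valid/invalid test values from a numeric field spec."""
    lo, hi = spec["min"], spec["max"]
    return {
        "valid": [lo, (lo + hi) // 2, hi],  # boundaries plus a midpoint
        "invalid": [lo - 1, hi + 1],        # just outside the range
    }

# Hypothetical specification for the quantity field.
spec = {"field": "quantity", "min": 1, "max": 9999}
print(derive_values(spec))
# {'valid': [1, 5000, 9999], 'invalid': [0, 10000]}
```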
So, is exhaustive testing impractical and too expensive? I don't think it is. Or, at least, it doesn't have to be. It seems to me perfectly plausible to build a tool suite that will derive test cases from a specification, derive value sets from the same source, and exhaustively run one against the other. It's the sort of automation that all development shops should be looking for.