Most people who are objecting to it are doing so on the issue of privacy. While I'm not happy about that, either, I object to it on a more fundamental issue that most people are barely mentioning:
IT WON'T WORK.
I've worked with face-recognition systems. They aren't bad, provided you train them on many pictures of the target individuals under the same lighting conditions they'll be tested in. Some vendors advertise 99+% accuracy. But that's in ideal conditions, where every person being scanned stops in exactly the right place and looks straight at the camera. This system is meant to watch people as they walk through security checkpoints - the angles and placement are going to be all over the place.
These systems also can't see through basic disguises. If the person is wearing a hat, glasses, a different hairstyle, different facial hair, makeup, and so on, it won't work nearly as well, if at all.
Also, let's look at very basic numbers here.
We would rather detain some non-terrorists than let actual terrorists get away. Hence the software will be tuned to minimize Type II errors (failing to flag a terrorist) at the cost of having more Type I errors (flagging non-terrorists). Let's be generous and say they get their actual, real-world accuracy up to 99.9%. That means one non-terrorist in a thousand will be flagged as a terrorist. For simplicity, we'll also assume that every terrorist who is in our database will be caught.
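To see why tuning against Type II errors inflates Type I errors, here is a toy sketch. The match scores below are entirely made up for illustration; real systems produce a similarity score per face comparison, and the operator picks a decision threshold.

```python
# Toy illustration of the Type I / Type II trade-off.
# The scores are invented for the example, not from any real system.
terrorist_scores = [0.91, 0.84, 0.62, 0.77]  # hypothetical matches against the database
innocent_scores = [0.10, 0.35, 0.58, 0.72, 0.20, 0.65, 0.40, 0.81]

def error_rates(threshold):
    """Return (Type I rate, Type II rate) at a given decision threshold."""
    type_i = sum(s >= threshold for s in innocent_scores) / len(innocent_scores)
    type_ii = sum(s < threshold for s in terrorist_scores) / len(terrorist_scores)
    return type_i, type_ii

for t in (0.9, 0.7, 0.5):
    fp, fn = error_rates(t)
    print(f"threshold {t}: flag {fp:.0%} of innocents, miss {fn:.0%} of terrorists")
```

Lowering the threshold from 0.9 to 0.5 takes the miss rate on the (hypothetical) terrorists from 75% down to 0%, but the fraction of innocents flagged climbs from 0% to 50%. Any real deployment that refuses to miss terrorists pays in exactly this currency.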
How many people go through American airports every day? Well, Pittsburgh airport claims to serve 20 million people a year - that's nearly 55,000 a day. So every day in Pittsburgh alone, roughly 55 people will be flagged as possible terrorists.
What percentage of people going through airport security are actually terrorists, let alone ones we have pictures of on file? Less than one in a thousand, by many orders of magnitude. Say one in a million (I think that's high, personally, especially given that we have to know who they are already). So on average, one terrorist goes through Pittsburgh every eighteen days or so. In that time, roughly ONE THOUSAND ordinary citizens have been detained as suspected terrorists.
So let's say we take the police-state approach and decide that's worth it. Let's look at this from the point of view of the security people enforcing these detainments. One real hit in a thousand flags is not psychologically much different from one in a million. They will learn very quickly that being identified by the computer doesn't mean a damn thing.
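The arithmetic above can be sanity-checked in a few lines, using only the assumptions already stated: 20 million passengers a year, a 0.1% false-positive rate, and one terrorist per million passengers.

```python
# Back-of-the-envelope check of the numbers above.
passengers_per_year = 20_000_000
passengers_per_day = passengers_per_year / 365  # ~54,795

false_positive_rate = 0.001  # 1 in 1,000 innocents flagged (99.9% accuracy)
terrorist_rate = 1e-6        # assumed: 1 terrorist per 1,000,000 passengers

false_flags_per_day = passengers_per_day * false_positive_rate  # ~55
days_per_terrorist = 1 / (passengers_per_day * terrorist_rate)  # ~18

# Note this ratio is just false_positive_rate / terrorist_rate:
# the daily passenger count cancels out.
innocents_per_real_hit = false_flags_per_day * days_per_terrorist  # ~1,000

print(f"~{false_flags_per_day:.0f} innocent flags per day")
print(f"one terrorist every ~{days_per_terrorist:.0f} days")
print(f"~{innocents_per_real_hit:.0f} innocents flagged per real terrorist")
```

Notice that the final ratio doesn't depend on the airport's size at all: it is simply the false-positive rate divided by the terrorist base rate, so a bigger airport just reaches the same 1,000-to-1 ratio faster.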
And that doesn't address the problem of what they do when they get someone the computer has identified. Will yet another identification check do more good than the two or three everyone already has to go through? I doubt we have DNA samples of all the suspected terrorists to match against. Do we refuse to let them travel - all those thousands of innocent people - just because they look like the wrong person?
And finally, it still won't stop terrorism. People our intelligence databases have never seen before will become the active agents. They will find disguises that fool the computers. And the larger the database grows, the longer it will take to match people against it, the more people will be wrongly detained, and the more unworkable air travel will become.
It's just not worth it. It's yet another feel-good measure, trying to raise morale while doing nothing but inconveniencing a whole lot of people.