It is a very bad idea to create a huge data set of 2 TB as text files. Do the files have a fixed width for all lines? This would at least allow a binary search in each file. If you search linearly through 2 TB of text files millions of times, you might wait for years to get all results. With a modern database, a processing time on the order of minutes is more likely.
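As a sketch of why fixed width matters: row i then starts exactly at byte i * width, so any record is reachable with a single seek instead of a scan. The file name "data.txt" and RECORD_WIDTH are assumptions for illustration; the width must include the line terminator.

```python
RECORD_WIDTH = 32                # bytes per line incl. "\n" (assumed value)

def read_record(f, index):
    f.seek(index * RECORD_WIDTH)                 # one seek, no scanning
    return f.read(RECORD_WIDTH).decode("ascii").rstrip("\n")

with open("data.txt", "rb") as f:                # placeholder file name
    print(read_record(f, 12345))
```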
Please blame the person who decided to use text files.
But now you have these files. Of course you can use a binary search: find the central element of each file. If the row width is fixed, this is trivial. If not, start at the central byte and search for the next (or previous) line break. If the found value is higher than the searched one, continue with the first half of the file; otherwise continue with the second half. This requires only log2(nData) file accesses, where nData is the number of records.
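A minimal sketch of this bisection over byte offsets, assuming binary mode, "\n" line endings, and that the file is sorted in plain byte order (e.g. by sort with LC_ALL=C). parse_key() is a placeholder; here the whole line is treated as the key, so adapt it to your record format.

```python
import os

def parse_key(line):
    # Placeholder: the whole line is the sort key; adapt to your format.
    return line.rstrip(b"\r\n")

def find_line(f, wanted, lo=0, hi=None):
    """Bisect an open, byte-sorted text file for the line with key `wanted`.

    Returns (line, start_offset) on a hit, (None, None) otherwise.
    `lo` must be the byte offset of a line start.
    """
    if hi is None:
        hi = os.fstat(f.fileno()).st_size
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid)
        f.readline()                # skip the partial line around `mid`
        if f.tell() >= hi:          # no full line starts after `mid`,
            f.seek(lo)              # so fall back to the line at `lo`
        start = f.tell()
        line = f.readline()
        key = parse_key(line)
        if key == wanted:
            return line, start
        if key > wanted:
            hi = start              # continue in the first half
        else:
            lo = start + len(line)  # continue in the second half
    return None, None
```

Each loop iteration costs one seek and at most two short reads, so a single lookup in a 2 TB file touches the disk only a few dozen times instead of scanning billions of bytes.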
If the values you search for are sorted as well, you can use this to refine the search further: start each lookup at the position of the previously found value instead of at the beginning of the file.
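A sketch of that refinement, building on the find_line() sketch above: sort the queries, then carry each hit's offset forward as the lower bound of the next search, so later lookups bisect an ever-shrinking tail of the file.

```python
def find_all(path, wanted_keys):
    """Look up many keys; sorted queries hit monotonically rising offsets."""
    results = {}
    with open(path, "rb") as f:
        lo = 0
        for key in sorted(wanted_keys):       # visit keys in file order
            line, start = find_line(f, key, lo=lo)
            results[key] = line
            if start is not None:
                lo = start                    # never search before the last hit
    return results
```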