A recent article on Slashdot discussed Google's request to ICANN to start using dotless TLDs. While ICANN ultimately denied the request, it's interesting to consider where the future of the Internet could go.
Most filtering systems support wildcards, but rarely in a way that scales to future concerns like this. That makes some sense in a way: how was anyone to know that dotless domains might ever happen? Another issue is that matching each request against a long list of speculative rules adds overhead, which even with caching can be an annoyance.
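To make the wildcard-versus-dotless point concrete, here is a minimal sketch of hostname filtering; the blocklist rules and the decision to flag every dotless name are illustrative assumptions, not any particular product's behavior.

```python
# Hypothetical blocklist using the common "*.suffix" wildcard convention.
BLOCKLIST = ["*.onion", "*.example"]

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname matches a blocklist rule or is dotless."""
    hostname = hostname.lower().rstrip(".")
    # A dotless name like "search" could be a dotless TLD (or a bare
    # intranet host); this sketch simply flags anything without a dot.
    if "." not in hostname:
        return True
    for rule in BLOCKLIST:
        if rule.startswith("*."):
            suffix = rule[1:]  # "*.onion" -> ".onion"
            if hostname.endswith(suffix):
                return True
        elif hostname == rule:
            return True
    return False
```

Even this toy version shows the overhead problem: every request walks the whole rule list, so a long list of "futuristic" rules costs something on every lookup unless results are cached.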
Should filtering rules be added on a whim, whenever new techniques or resources become available, or only when it best suits the network? It's tough to say.
When building a LAN you could easily tell the filtering system to deny any requests to *.onion, but then you have to consider how likely it is that anyone could even set up a Tor node inside the network, get it to connect successfully, and run applications like web servers over it. If all of that is possible, you have bigger concerns than which domains are accessible.
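At the DNS layer, a resolver like dnsmasq can refuse to resolve an entire pseudo-TLD in one line; a minimal sketch (the `address=` directive is real dnsmasq syntax, the file path is an assumption):

```
# /etc/dnsmasq.d/block-onion.conf  (path is an assumption)
# Resolve anything under .onion to 0.0.0.0, effectively denying *.onion
# at the LAN's resolver.
address=/onion/0.0.0.0
```

The caveat, and it reinforces the point above, is that .onion addresses are resolved inside the Tor network rather than through ordinary DNS, so a rule like this only catches applications that leak .onion lookups to the LAN's resolver. A working Tor node on the network bypasses it entirely.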