For most people, vision is the primary sensory modality, allowing us to navigate the world and interact with it. It is our means of driving safely through traffic, avoiding obstacles, perceiving food we want to eat, reading, and recognising the face of a loved one. Yet at any given moment, visual scenes contain far more information than our brain can process to the level of awareness. Visual attention therefore plays a fundamental triaging role in shaping our perception of the world, selecting relevant information for privileged processing while filtering out the rest. One key way that humans regulate their visual attentional resources is by adjusting the scale, or breadth, of attention: a narrow attentional breadth is akin to tunnel vision, whereas a broad attentional breadth encompasses more of a visual scene.

I have two key research interests in relation to this process. The first is the perceptual consequences of these different attentional breadths. It has long been accepted that all perceptual tasks benefit from a narrow breadth of attention. The work in my lab challenges this conventional wisdom: we have developed new theoretical models, grounded in the neurophysiological visual pathways of the brain, that predict how different perceptual tasks will be affected (i.e., whether performance will be enhanced or impaired) by different attentional breadths. The second is the rapid rescaling of attentional breadth that many real-world tasks require (e.g., when driving a car, reading the speedometer demands a narrow focus of spatial attention, whereas monitoring the road for movement, such as a child approaching the road or the trajectories of other cars, demands a broad focus). A current line of research is therefore examining how to promote rapid and efficient rescaling of attention.