Consciousness is definitely an interesting problem. Some years ago, I spent a while looking into it, inspired by a 2004 book, Mind, by John Searle. The result was my 2008 paper, Drinking from the Firehose of Experience.
Of course, a fair amount of time has gone by since then, including the deep neural network revolution, so it may be possible to get new traction on this problem today. The question is whether there is currently something new and interesting that one can do, particularly in a short period of time. See if you can convince me that this would be worth doing.
The Easy Problem is: What does consciousness do for you? How does it help a creature with a mind to survive and thrive? Another way to ask the same question is: What role does consciousness play in the cognitive architecture of a mind? Look at John Laird's work on SOAR, which describes his approach to cognitive architecture. John Searle's book also implicitly refers to the needs of a cognitive architecture.
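To make "cognitive architecture" concrete, here is a minimal sketch of a production-system decision cycle, loosely in the spirit of architectures like SOAR. All names are my own invention, and real SOAR has far more machinery (operators, preferences, impasses, chunking); this only illustrates the perceive-match-act loop that such architectures are built around.

```python
# Minimal sketch of a production-system decision cycle, loosely in the
# spirit of architectures like SOAR. Hypothetical names; real SOAR has
# much more machinery (operators, preferences, impasses, chunking).

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # working memory (dict) -> bool
        self.action = action        # working memory (dict) -> new facts

def decision_cycle(working_memory, rules, max_cycles=10):
    """Repeatedly fire a matching rule until no rule applies."""
    for _ in range(max_cycles):
        matched = [r for r in rules if r.condition(working_memory)]
        if not matched:
            break  # impasse: no rule applies to the current state
        chosen = matched[0]  # a real architecture resolves conflicts here
        working_memory.update(chosen.action(working_memory))
    return working_memory

# Toy example: an agent that notices hunger and eats.
rules = [
    Rule("eat",
         lambda wm: wm.get("hungry") and wm.get("food"),
         lambda wm: {"hungry": False, "food": False}),
]
wm = decision_cycle({"hungry": True, "food": True}, rules)
print(wm)  # -> {'hungry': False, 'food': False}
```

Notice that nothing in this loop corresponds to consciousness; the Easy Problem asks what an extra self-monitoring component would add to an architecture like this.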
The Hard Problem is (more or less): How can consciousness feel like anything at all? This problem is Hard because we can't even understand what it would mean to answer this question. I have only a little to say about this in my paper.
One useful way to explore the properties of consciousness is to consider agents that do and don't have consciousness, and how well they function. Each of us is quite certain that we ourselves are conscious. Why? What convinces us? Do we know that other people are conscious, or do we simply act on that assumption out of courtesy? Many people (including me) believe that dogs and cats are conscious, though perhaps in a lesser way than humans. Oysters and trees, very likely not. Is there a boundary between the kinds of creatures that do have some sort of consciousness, and those that don't? Personally, I suspect that it may not actually be useful to search for this boundary.
There is an argument that corporations are artificially intelligent creatures. They clearly act in the real world; they control a lot of resources; they are "legal persons"; they can work on (and often solve) difficult problems. They typically have people as replaceable parts. I don't believe corporations are conscious, in anything like the way that humans are conscious. If corporations are very capable, but not conscious, what does that tell us about consciousness?
If you start with these ideas, do they shed light on the nature of consciousness? What sort of computational experiment would help us make sense of this phenomenon?