It all depends on the logic you intend to test, and on how large an area of the application you will need to investigate when the test fails.
A full end-to-end test verifies data on screen. You can certainly replicate that test in BDD, but BDD is about behavior. What behavior are you testing? How many reasons does your test have to fail?
If you replicate the full end-to-end test in BDD and it fails because something does not appear on screen, you have quite a bit of code to investigate. If the test creates some data via the user interface and then verifies that the data appears on a different page, which part failed? Did the "create data" page fail? Did the "view data" page fail? Did the call to a web API fail? Did the web API itself fail?
Debugging that failing test becomes frustrating, because you have so much to look at. Despite the frustration, a full end-to-end test is useful: it ensures all the parts work together. You do not necessarily need to add data to the system through the UI, though. It is entirely appropriate to make direct database or web API calls from your Given steps. This is nice because the tests will likely run faster, and they are less likely to fail on a Given step. Ideally you want a test to fail only on a When or Then step, the steps that exercise the behavior under test. For example:
Given thing A exists # Calls database
And another thing has been done # Calls web API
When I do the thing # Selenium interacts with the browser
Then something should change # Selenium verifies info on screen
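As a minimal sketch of how a Given step can seed data directly instead of driving the UI, here is a plain-Python example using an in-memory SQLite database. The `things` table and step-function names are hypothetical; in a real Cucumber or behave suite these functions would be step definitions bound to the Gherkin above, and the Then step would use Selenium against the screen rather than querying the database.

```python
import sqlite3

def given_thing_exists(db, name):
    # Given step: insert the record directly, bypassing the "create data" page.
    db.execute("INSERT INTO things (name) VALUES (?)", (name,))
    db.commit()

def then_thing_is_listed(db, name):
    # In a real suite Selenium would verify the screen; here we just
    # confirm the seeded row exists so the sketch is self-contained.
    row = db.execute(
        "SELECT name FROM things WHERE name = ?", (name,)
    ).fetchone()
    assert row is not None, f"expected {name!r} to exist"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE things (name TEXT)")
given_thing_exists(db, "thing A")
then_thing_is_listed(db, "thing A")
```

Because the setup skips the UI entirely, a failure in this Given step points at the database layer alone, not at two pages plus an API.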
Full end-to-end tests should be much fewer in number than the fine-grained tests that assert specific variations of behavior; those variations deserve their own, narrower tests.
If you intend to test the UI layer, a full end-to-end test is appropriate. If you want to test the APIs themselves, I would not involve the user interface at all; have your Cucumber steps call the APIs directly.
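To illustrate a step that exercises an API with no browser involved, here is a self-contained sketch. It spins up a throwaway stand-in web API with Python's stdlib HTTP server (the `/things` endpoint and its response shape are invented for the example); in your suite the step would hit your real service's URL instead.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ThingHandler(BaseHTTPRequestHandler):
    # Stand-in for the real web API: always returns one known thing.
    def do_GET(self):
        body = json.dumps({"things": ["thing A"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = HTTPServer(("127.0.0.1", 0), ThingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base_url = f"http://127.0.0.1:{server.server_port}"

def then_the_api_lists_the_thing():
    # Cucumber-style Then step: call the API directly, no Selenium, no UI.
    with urlopen(f"{base_url}/things") as resp:
        data = json.load(resp)
    assert "thing A" in data["things"]
    return data

result = then_the_api_lists_the_thing()
server.shutdown()
```

When a test like this fails, there is exactly one layer to investigate: the API itself.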