Channel: Media Foundation Development for Windows Desktop forum
Viewing all 1079 articles

MFStartup fails with MF_E_BAD_STARTUP_VERSION even with Platform Update Supplement for Windows Vista installed


I'm attempting to use certain interfaces, such as IMFSinkWriter, on Vista, but I'm having the same problem as this guy:

http://stackoverflow.com/questions/7403687/how-to-use-mediafoundation-imfsourcereader-on-vista-sp2-with-platform-update

I have verified that the Platform Update Supplement for Windows Vista is installed on the machine. I'm building against the Windows SDK 7.1 and am setting the target platform to WINVER=0x0601 since I'm using an interface that's otherwise not available on Vista.

However, MFStartup() fails with MF_E_BAD_STARTUP_VERSION even though the Platform Update Supplement for Windows Vista is supposed to make this work.

Help!


IMFTransform::ProcessInput method returning negative value in Windows 8


Hi Guys,

I am running the same code, which uses an MFT, on Windows 7 and on a Windows 8 system, but on Windows 8 the ProcessInput method returns the negative value -1072875851. I am using VS2008.

I am new to this concept, so any kind of input will be much appreciated :)

Thanks

Neha

RTP Multicast Stream - Another Approach


Dear Community,

Besides my tests (RTP TS stream and Media Foundation), I tried another approach to simplify my quest.

When I store my RTP payload to the hard disk and load the file into the MediaElement, the stream is rendered. So I made a custom byte stream that connects to the RTP source and delivers the data via BeginRead. Because the source must support seeking, I store the packet data in memory so that I can replay the data the MediaElement requests. My problem is that the MediaElement buffers a large amount of data before it starts playing. Setting the IS_REMOTE capability changes nothing either. In my understanding, IS_REMOTE should signal that the source is a remote source, and the MediaElement should not require seeking...

Is it possible to make something like a custom network byte stream, where the MediaElement does not require seeking and, more importantly, where the MediaElement does not buffer a huge amount of data?

Best regards,

Michael

How does IMFVideoDisplayControl::RepaintVideo Method work


Hello everyone,

I seem to have a problem with the IMFVideoDisplayControl::RepaintVideo method: it doesn't always repaint the video.

Can you please explain when this function has an effect? Does the player have to be paused, or can it be used while scrubbing (play rate 0)?

The documentation states that the function should be called whenever a WM_PAINT message is received. Another issue I have is that under Windows 7 this seems to work, but under Windows 8.1 it doesn't (e.g. when the video window is resized).

Your help is much appreciated,

Alin

Digital Audio Workstation


Hi there.

I have been intrigued for a long time by the idea of creating a DAW with full VST support and MIDI support on the .NET Framework. I was wondering if anyone is interested in helping me with this project.

Is the .NET Framework robust enough to support low latencies in this DSP environment?

Some Ideas:

Full Audio mixing.

Midi VSTi support.

Vst plugin support.

Modular DSP design.

Etc.

Please contact me.

zzypventer@gmail.com

Capture Remote Camera using Source Reader

Hello Audio/Video gurus. I have managed to render local .mp4 and .wmv files using the Source Reader. I pass the path of the local file to the function MFCreateSourceReaderFromURL. The .wmv plays fine, but the .mp4 plays a little slower.

I tried to do the same to render a camera on my LAN and passed its URL into the same function, but it gives an error. The error code is 0xC00D36C4; its description says that the byte stream of the given URL is unsupported.

The question is: do I have to write my own media source (or something similar) to receive the RTSP packets, or can this be done by playing with the Source Reader's properties/attributes? Thanks for your input.

NOTE: I have not tried to interact with or modify any networking-related property of the Source Reader, except passing this camera URL.

Prevent caching for HTTP video stream


I have real trouble with a video stream that I am getting from an IP camera. It creates a cache file in the Temporary Internet Files folder, and after 4 days it completely trashed my HDD. I found one topic where a developer tried to use the property that prevents caching, but had no success. I really don't want to write my own scheme handler, but it seems to me there is no other option. Here is what I am doing:

CComPtr<IPropertyStore> pPropStore;
IpCamCredentialManager *pCredentials = new (std::nothrow) IpCamCredentialManager(username, pass);

// Configure the property store.
hr = PSCreateMemoryPropertyStore(IID_PPV_ARGS(&pPropStore));
if (SUCCEEDED(hr))
{
    // Credential property
    PROPERTYKEY key;
    key.fmtid = MFNETSOURCE_CREDENTIAL_MANAGER;
    key.pid = 0;

    PROPVARIANT var;
    var.vt = VT_UNKNOWN;
    hr = pCredentials->QueryInterface(IID_PPV_ARGS(&var.punkVal));

    if (SUCCEEDED(hr))
        hr = pPropStore->SetValue(key, var);
    PropVariantClear(&var);

    // NO CACHE property
    key.fmtid = MFNETSOURCE_CACHEENABLED;
    key.pid = 0;

    var.vt = VT_I4;
    var.lVal = VARIANT_FALSE;

    hr = pPropStore->SetValue(key, var);
    PropVariantClear(&var);
}

CComPtr<IUnknown> pCancelCookie;
LOG_TRACE("Try to open URL: " << sURL);

hr = pSourceResolver_->BeginCreateObjectFromURL(
    sURL.c_str(),               // URL of the source.
    MF_RESOLUTION_MEDIASOURCE |
    MF_RESOLUTION_CONTENT_DOES_NOT_HAVE_TO_MATCH_EXTENSION_OR_MIME_TYPE,
    pPropStore,                 // Optional property store for extra parameters.
    &pCancelCookie,
    this,
    NULL);
BREAK_ON_FAIL(hr);

This code works fine, but the cache file still gets created, and that is the real problem.

Any ideas?

Thanks

Aleksey

Media Foundation App gets code C00D36C4 but only on occasions


This seems to be a fairly common problem, though I have quite a combination.

I have an iPhone app that I am testing on a 4S. It:

  • Takes a short video (5-30 seconds, depending on circumstances)
  • Exports it (rotation, frame rate, later adding timestamps and a status text layer)
  • Uploads it to a PC server via a socket
  • Repeats until circumstances change.

The PC server (Windows C) saves each file (they are MOV: H.264 (AVC) video with AAC audio) and then decides if the video needs to be played to the operator. If so, a Windows C++ app based on the Media Foundation MF_BasicPlayback sample is started, and a WM_COPYDATA link is used to exchange window handles and then send the names of the videos to be played. The C++ app queues the requests and plays them to completion one at a time. This is repeated until the server sends a stop message to the C++ app, which then closes down. Alternatively, the C++ application can send a WM_COPYDATA command to the server to instruct it to shut the video display link down.

Every now and then, maybe 1 time in 10 (but it varies), things go wrong and I get a message box showing C00D36C4. This suggests bad data, but if I test the files with Movie Maker, they all run fine. Any ideas?

I am not sure how to tell which release of Media Foundation I am running. I am on Windows 7 and have Windows SDK 7 installed, with MFPlay.H dated 04/19/2010. I also have SDK 8.1, dated 08/21/2013. Should I be copying one of these files (and others?) into my project directory?

I am using Microsoft Visual C++ 2010 Express.

Ed


ert304



I found the problem. It was an invalid file name. Is the message text correct for such an error?

MFT H264 Issue

Hi All,

For our purposes we need to decode multiple H264 streams, likely with different resolutions, at the same time. For this reason we want to use the Media Foundation H264 decoder with the DXVA option enabled (it is a requirement). We wrote a class which successfully uses the MFT as a decoder; no topology is involved, and each instance of this class has its own thread. But here is our issue: the more instances we have, the more stuck our GUI becomes.

We can report the following case: with seven Full HD streams, the CPU usage seems acceptable but the GUI tends to freeze. Reducing to five streams at the same resolution, our GUI does not seem to suffer from this problem. Now, we know that hardware-assisted decoding is a limited resource, so: is there a way, either an API call or some other method, to retrieve the maximum number of MFTs with DXVA enabled that we can instantiate?

Thanks in advance to all.

How can I use MFCreateSourceReaderFromURL to access a video file in Metro?


Hello,

I use MFCreateSourceReaderFromURL to load a video file and show thumbnails in my Metro project. If I add a video file to the "Assets" directory in my project, it works. But if I remove the video from the project, or select another file with FileOpenPicker, it shows "E_ACCESSDENIED General access denied error." The code is as below:

// Create the source reader from the URL.
if (SUCCEEDED(hr))
{
    hr = MFCreateSourceReaderFromURL(wszFileName, pAttributes, &m_pReader);
    // wszFileName = C:\Users\xxx\Videos\test.mp4
}

So is it that the Win32 API can't access files outside the project in Metro, or am I making a mistake? I would appreciate it if someone could tell me the solution.



Wave streaming is cut while played in WMP


Hi,

We're using WMPLib as an embedded player in our C# application, but this issue also occurs in Windows Media Player itself.

Our server streams audio and has streaming with an offset enabled. Given the URL http://ourserver.com/playback?id=400, it returns an octet stream or an x-wave stream (neither works).

WMP starts buffering and playing the 2-minute wave file from the given URL (everything works fine), but when I grab and drop the track slider at about 90% of its length, the playback cuts off and WMP displays a general audio error (C00D11B1). When I drop the slider closer (10-20% further back), the playback continues without any errors and plays from the offset properly. I'm using Windows 7 Home Premium 64-bit.

When I use the same URL in other media player applications (VLC), no error occurs.

What causes such an issue?

TCP packets from this situation:

WMP->Server = get the wave

GET http://ourserver.com/playback?id=400
Cache-Control: no-cache
Connection: Keep-Alive
Pragma: getIfoFileURI.dlna.org
Accept: */*
Cookie: PHPSESSID=7e5156ec44280a9210570158c5d31475
User-Agent: NSPlayer/12.00.7601.17514 WMFSDK/12.00.7601.17514
GetContentFeatures.DLNA.ORG: 1
Host: 192.168.0.5

Server->WMP - return file

HTTP/1.1 200 OK
X-Powered-By: PHP/5.3.6
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-cache
Pragma: no-cache
Content-Description: File Transfer
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
Content-Disposition: attachment; filename="sound.wav"
Content-Length: 833658
Accept-Ranges: bytes
Date: Tue, 02 Apr 2013 10:46:01 GMT
Server: lighttpd/1.4.28

The sound is played, we move the slider...

WMP->Server - get the sound with offset (set in range)

GET http://ourserver.com/playback?id=400
Cache-Control: no-cache
Connection: Keep-Alive
Pragma: getIfoFileURI.dlna.org
Accept: */*
Cookie: PHPSESSID=7e5156ec44280a9210570158c5d31475
Range: bytes=733184-833657
User-Agent: NSPlayer/12.00.7601.17514 WMFSDK/12.00.7601.17514
GetContentFeatures.DLNA.ORG: 1
Host: 192.168.0.5

Server->WMP - returns sound 

HTTP/1.1 206 Partial Content
X-Powered-By: PHP/5.3.6
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Content-Range: bytes 733184-833657/
Cache-Control: no-cache
Pragma: no-cache
Content-Description: File Transfer
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
Content-Disposition: attachment; filename="sound.wav"
Content-Length: 833658
Accept-Ranges: bytes
Date: Tue, 02 Apr 2013 10:46:10 GMT
Server: lighttpd/1.4.28

The stream continues, but WMP cuts off playback and shows the error.

How can we fix this?

Thanks for any help.



Video with alpha channel ... comes out opaque

Hi,

I've got an uncompressed AVI with ARGB frames, each with an alpha channel.

When I render this with Media Foundation, the frames are decoded, but the alpha seems to be ignored and the frames come out visible but opaque.

Could Microsoft please supply a sample application that demonstrates video streams with alpha channels decoding correctly?

Thank you.

Best regards,
Steve Williams
Advance Software

multi application support


I plan to design an MFT that is connected after the media source; both of them will be part of a source reader.

I have two questions:

1. Does Windows 8 require this architecture to support multiple applications?

2. Does Media Foundation have multi-application capability, or will my MFT duplicate the processing in two different processes?

 

Decode H264 stream using H264 Decoder MFT

Hello. I am trying to decode an H264 video stream using the H264 decoder MFT. I take the compressed data from a local mp4 file. Here is how I am approaching this problem:

1. Get the decoder using MFTEnumEx().
2. Get the stream IDs using IMFTransform::GetStreamIDs(). Since it is a decoder, it has only one input and one output stream.
3. Set the input type of the IMFTransform. I set the same type that I set on the IMFSourceReader when reading the compressed frames from the mp4 file.
4. Set the output type of the IMFTransform, the same type used for getting uncompressed frames from the IMFSourceReader.
5. Allocate the output buffer (IMFSample), because dwFlags of _MFT_OUTPUT_STREAM_INFO_FLAGS is 7 when I call IMFTransform::GetOutputStreamInfo.
6. Now I start getting compressed samples from the IMFSourceReader in a loop. I get the compressed data and pass it to IMFTransform::ProcessInput().
7. Then I call IMFTransform::ProcessOutput(). This gives the error MF_E_TRANSFORM_NEED_MORE_INPUT.
8. I keep getting data from the source reader, calling ProcessInput and then ProcessOutput, but ProcessOutput() always returns MF_E_TRANSFORM_NEED_MORE_INPUT.
9. I even tried merging two samples using IMFSample::AddBuffer(), and even merging two buffers of the same sample using IMFSample::ConvertToContiguousBuffer(). Still the same.

Does anyone have any idea what I am doing wrong? Should I be calling ProcessInput at all? If yes, then why, given that I am already giving it the encoded data to produce output? Thanks in advance for any pointers.

Video Capture on recent Windows 8.1 Tablets shows very dark video


Hello,

I switched from DirectShow to Media Foundation to capture video from webcams. It is a desktop application and works well with both DirectShow and Media Foundation on Windows 7 and Windows 8.1 desktop computers, with a lot of different webcams.

Trying the same application on a Windows 8.1 Atom-based tablet, the video is very dark and green.

I tried it on the following tablets (all of them show the above described behavior):

-Acer T100A (camera sensor MT9M114, Atom 3740)

-Dell Venue Pro 11 (camera sensor OV2722 front, IMX175 back - Atom 3770)

-HP Omni 10 5600 (camera sensor OV2722, IMX175 - Atom 3770)

I capture using IMFMediaSession, building a simple topology with a media source and the EVR.

  • TopoEdit shows the same strange behavior
  • MFTrace does not show any errors (at least I do not see any errors)
  • In case an external usb camera is used on all these tablets, the video is fine.
  • The SDK sample MFCaptureD3D works fine; it uses the source reader for capturing. I verified the media type of the source used there, and it is the same one I use in my application (same stream descriptor, same media type, verified with mftrace)
  • The "CaptureEngine" video capture sample from the SDK also works as expected; however, I need Windows 7 compatibility and would like to use the same source on both platforms
  • When using DirectShow, all the above-mentioned tablets show only a fraction of the sensor image when capturing at lower resolutions (e.g. 640x360), though the colors of the video are fine. I tried it with the Skype desktop app and with GraphEdit: same behavior (only a fraction of the video is shown, colors are fine). Skype for desktop apparently uses a DirectShow source filter.

Has anyone tried capturing the camera of an Atom z3700 series tablet with media foundation using the media session? If so, is special handling of the media source required on these tablets?

If required, I will post some code or mftrace logs.

Thanks a lot,

Karl


Bluemlinger






0xc000007b Windows 8.1 (64 bit)


Hey there,

I have installed a fresh Windows 8.1 (64-bit). Whenever I install and play heavy games like Black Ops or FIFA or any other game, it gives me an error: "The Application was unable to start correctly (0xc000007b). Click OK to close the application."

Let me tell you, I have tried to find all the missing DLL files, the C++ redistributables, and the other basics. I have Googled it too, but could not find a proper answer or the reason for this error. Please help me as soon as possible.

Thanks

Deepak



Show last frame from video 1 while starting video 2


I am writing a system that displays multiple short videos taken back to back. My code is based on the player.cpp example.

At the end of each video, there is a black flash on the video window. Is there any way of holding the last frame of the video that just completed in the window while the next video is starting?

Ed


ert304

Media sink as a socket or a named pipe


Hi All,

I am trying to capture a video stream from a webcam, and for the sink I want to display the video in some other application. Is it possible for the media sink to write the data to a socket or a named pipe, so that I can access the data from a separate application? Please note that I don't want to use an archive sink and then read the data from there with the other application, as it has to be live streaming.

Thanks.

  

Transcoding to file vs transcoding to HTTP stream


Hello,

After spending a week on the problem, I need some help. Here is the problem:

1. I have an MJPEG camera and I successfully created a source. It works great and produces MEDIATYPE_Video/MFVideoFormat_MJPG.

So far so good.

2. Next I wanted to use this source to write output to an ASF file, and it also works great. From the traces I can see the topology:

MySource:MFVideoFormat_MJPEG->mfmjpegdec:MFVideoFormat_YUY2->MF:MFVideoFormat_IYUY->wmvencod:MFVideoFormat_WMV3

I can see the file and play it. Great!

So the last thing I need is to create an HTTP stream and be able to re-stream the MJPEG as ASF, and here I have failed. I use the sample from the great book "Developing Microsoft Media Foundation Applications".

First, how the topology looks:

MySource:MFVideoFormat_MJPEG->TEE->MFVideoFormat_MJPEG->MF and that is it!!! Strange, why didn't it add the ASF encoder?

I also tried to add the MJPG/WMV transcoders manually, but from the traces they were not properly resolved, which is also strange.

I would greatly appreciate it if anybody has at least a guess at what is wrong!!!

Aleksey

Here is the code, quite a lot, but it is really straightforward:

start(){

        hr = CreateMediaSource(url);
        hr = CreateNetworkSink(8080);
        hr = CreateTopology();

}

CreateMediaSource() - so far it is not interesting

HRESULT CTopoBuilder::CreateNetworkSink(DWORD requestPort)
{
    HRESULT hr = S_OK;
    CComPtr<IMFPresentationDescriptor> pPresDescriptor;
    CComPtr<IMFASFProfile> pAsfProfile;
    CComQIPtr<IMFASFContentInfo> pAsfContentInfo;
    
    CComPtr<IMFActivate> pByteStreamActivate;
    CComPtr<IMFActivate> pNetSinkActivate;

    do
    {
        BREAK_ON_NULL(m_pSource, E_UNEXPECTED);

        // create an HTTP activator for the custom HTTP output byte stream object
        pByteStreamActivate = new (std::nothrow) CHttpOutputStreamActivate(requestPort);
        BREAK_ON_NULL(pByteStreamActivate, E_OUTOFMEMORY);
        
        // create the presentation descriptor for the source
        hr = m_pSource->CreatePresentationDescriptor(&pPresDescriptor);
        BREAK_ON_FAIL(hr);

        // create the ASF profile from the presentation descriptor
        hr = MFCreateASFProfileFromPresentationDescriptor(pPresDescriptor, &pAsfProfile);
        BREAK_ON_FAIL(hr);

       // create the ContentInfo object for the ASF profile
        hr = MFCreateASFContentInfo(&pAsfContentInfo);
        BREAK_ON_FAIL(hr);

        // set the profile on the content info object
        hr = pAsfContentInfo->SetProfile(pAsfProfile);
        BREAK_ON_FAIL(hr);

        // create an activator object for an ASF streaming sink
        hr = MFCreateASFStreamingMediaSinkActivate(pByteStreamActivate, pAsfContentInfo, 
            &m_pNetworkSinkActivate);
        BREAK_ON_FAIL(hr);
    }
    while(false);
    return hr;
}

HRESULT CTopoBuilder::CreateTopology(void)
{
    HRESULT hr = S_OK;
    CComQIPtr<IMFPresentationDescriptor> pPresDescriptor;
    DWORD nSourceStreams = 0;

    do
    {
        // release the old topology if there was one        
        m_pTopology.Release();
        
        // Create a new topology.
        hr = MFCreateTopology(&m_pTopology);
        BREAK_ON_FAIL(hr);

        // Create the presentation descriptor for the media source - a container object that
        // holds a list of the streams and allows selection of streams that will be used.
        hr = m_pSource->CreatePresentationDescriptor(&pPresDescriptor);
        BREAK_ON_FAIL(hr);

        // Get the number of streams in the media source
        hr = pPresDescriptor->GetStreamDescriptorCount(&nSourceStreams);
        BREAK_ON_FAIL(hr);

        // For each stream, create source and sink nodes and add them to the topology.
        for (DWORD x = 0; x < nSourceStreams; x++)
        {
            hr = AddBranchToPartialTopology(pPresDescriptor, x);
            
            // if we failed to build a branch for this stream type, then deselect it
            // that will cause the stream to be disabled, and the source will not produce
            // any data for it
            if(FAILED(hr))
            {
                hr = pPresDescriptor->DeselectStream(x);
                BREAK_ON_FAIL(hr);
            }
        }
    }
    while(false);

    return hr;
}

HRESULT CTopoBuilder::AddBranchToPartialTopology(
    CComPtr<IMFPresentationDescriptor> pPresDescriptor, 
    DWORD nStream)
{
    HRESULT hr = S_OK;
    CComPtr<IMFStreamDescriptor> pStreamDescriptor;
    CComPtr<IMFTopologyNode> pSourceNode;
    CComPtr<IMFTopologyNode> pOutputNode;
    BOOL streamSelected = FALSE;

    do
    {
        BREAK_ON_NULL(m_pTopology, E_UNEXPECTED);

        // Get the stream descriptor for this stream (information about stream).
        hr = pPresDescriptor->GetStreamDescriptorByIndex(nStream, &streamSelected, &pStreamDescriptor);
        BREAK_ON_FAIL(hr);

        // Create the topology branch only if the stream is selected - IE if the user wants to play it.
        if (streamSelected)
        {
            // Create a source node for this stream.
            hr = CreateSourceStreamNode(pPresDescriptor, pStreamDescriptor, pSourceNode);
            BREAK_ON_FAIL(hr);

            // Create the output node for the renderer.
            hr = CreateOutputNode(pStreamDescriptor, m_videoHwnd, pSourceNode, &pOutputNode);
            BREAK_ON_FAIL(hr);
         

            // Add the source and sink nodes to the topology.
            hr = m_pTopology->AddNode(pSourceNode);
            BREAK_ON_FAIL(hr);

            hr = m_pTopology->AddNode(pOutputNode);
            BREAK_ON_FAIL(hr);
            

            // Connect the source node to the output node.  The topology will find the
            // intermediate nodes needed to convert media types.
            hr = pSourceNode->ConnectOutput(0, pOutputNode, 0);
        }
    }
    while(false);

    return hr;
}
HRESULT CTopoBuilder::CreateSourceStreamNode(
    CComPtr<IMFPresentationDescriptor> pPresDescriptor,
    CComPtr<IMFStreamDescriptor> pStreamDescriptor,
    CComPtr<IMFTopologyNode> &pNode)
{
    HRESULT hr = S_OK;

    do
    {
        BREAK_ON_NULL(pPresDescriptor, E_POINTER);
        BREAK_ON_NULL(pStreamDescriptor, E_POINTER);

        // Create the topology node, indicating that it must be a source node.
        hr = MFCreateTopologyNode(MF_TOPOLOGY_SOURCESTREAM_NODE, &pNode);
        BREAK_ON_FAIL(hr);

        // Associate the node with the source by passing in a pointer to the media source,
        // and indicating that it is the source
        hr = pNode->SetUnknown(MF_TOPONODE_SOURCE, m_pSource);
        BREAK_ON_FAIL(hr);

        // Set the node presentation descriptor attribute of the node by passing 
        // in a pointer to the presentation descriptor
        hr = pNode->SetUnknown(MF_TOPONODE_PRESENTATION_DESCRIPTOR, pPresDescriptor);
        BREAK_ON_FAIL(hr);

        // Set the node stream descriptor attribute by passing in a pointer to the stream
        // descriptor
        hr = pNode->SetUnknown(MF_TOPONODE_STREAM_DESCRIPTOR, pStreamDescriptor);
        BREAK_ON_FAIL(hr);
    }
    while(false);

    return hr;
}
HRESULT CTopoBuilder::CreateOutputNode(
    CComPtr<IMFStreamDescriptor> pStreamDescriptor,
    HWND hwndVideo,
    IMFTopologyNode* pSNode,
    IMFTopologyNode** ppOutputNode)
{
    HRESULT hr = S_OK;
    CComPtr<IMFMediaTypeHandler> pHandler = NULL;
    CComPtr<IMFActivate> pRendererActivate = NULL;
    CComPtr<IMFTopologyNode> pSourceNode = pSNode;
    CComPtr<IMFTopologyNode> pOutputNode;

    GUID majorType = GUID_NULL;

    do
    {
        if(m_videoHwnd != NULL)
        {
            // Get the media type handler for the stream which will be used to process
            // the media types of the stream.  The handler stores the media type.
            hr = pStreamDescriptor->GetMediaTypeHandler(&pHandler);
            BREAK_ON_FAIL(hr);

            // Get the major media type (e.g. video or audio)
            hr = pHandler->GetMajorType(&majorType);
            BREAK_ON_FAIL(hr);

            // Create an IMFActivate controller object for the renderer, based on the media type.
            // The activation objects are used by the session in order to create the renderers only when 
            // they are needed - IE only right before starting playback.  The activation objects are also
            // used to shut down the renderers.
            if (majorType == MFMediaType_Audio)
            {
                // if the stream major type is audio, create the audio renderer.
                hr = MFCreateAudioRendererActivate(&pRendererActivate);
            }
            else if (majorType == MFMediaType_Video)
            {
                // if the stream major type is video, create the video renderer, passing in the video
                // window handle - that's where the video will be playing.
                hr = MFCreateVideoRendererActivate(hwndVideo, &pRendererActivate);
            }
            else
            {
                // fail if the stream type is not video or audio.  For example fail
                // if we encounter a CC stream.
                hr = E_FAIL;
            }

            BREAK_ON_FAIL(hr);

            // Create the node which will represent the renderer
            hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pOutputNode);
            BREAK_ON_FAIL(hr);

            // Store the IActivate object in the sink node - it will be extracted later by the
            // media session during the topology render phase.
            hr = pOutputNode->SetObject(pRendererActivate);
            BREAK_ON_FAIL(hr);
        }

        if(m_pNetworkSinkActivate != NULL)
        {
            CComPtr<IMFTopologyNode> pOldOutput = pOutputNode;
            pOutputNode = NULL;
            hr = CreateTeeNetworkTwig(pStreamDescriptor, pOldOutput, &pOutputNode);
            BREAK_ON_FAIL(hr);
        }

        *ppOutputNode = pOutputNode.Detach();
    }
    while(false);

    return hr;
}

HRESULT CTopoBuilder::CreateTeeNetworkTwig(IMFStreamDescriptor* pStreamDescriptor, 
    IMFTopologyNode* pRendererNode, IMFTopologyNode** ppTeeNode)
{
    HRESULT hr = S_OK;
    CComPtr<IMFTopologyNode> pNetworkOutputNode;
    CComPtr<IMFTopologyNode> pTeeNode;
    DWORD streamId = 0;

    do
    {
        BREAK_ON_NULL(ppTeeNode, E_POINTER);

        // if the network sink is not configured, just exit
        if(m_pNetworkSinkActivate == NULL)
            break;

        // get the stream ID
        hr = pStreamDescriptor->GetStreamIdentifier(&streamId);
        BREAK_ON_FAIL(hr);

        // create the output topology node for one of the streams on the network sink
        hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pNetworkOutputNode);
        BREAK_ON_FAIL(hr);

        // set the output stream ID on the stream sink topology node
        hr = pNetworkOutputNode->SetUINT32(MF_TOPONODE_STREAMID, streamId);
        BREAK_ON_FAIL(hr);

        // associate the output network topology node with the network sink
        hr = pNetworkOutputNode->SetObject(m_pNetworkSinkActivate);
        BREAK_ON_FAIL(hr);

        // add the network output topology node to the topology
        hr = m_pTopology->AddNode(pNetworkOutputNode);
        BREAK_ON_FAIL(hr);
        
        
        // create the topology Tee node
        hr = MFCreateTopologyNode(MF_TOPOLOGY_TEE_NODE, &pTeeNode);
        BREAK_ON_FAIL(hr);

        // connect the first Tee node output to the network sink node
        hr = pTeeNode->ConnectOutput(0, pNetworkOutputNode, 0);
        BREAK_ON_FAIL(hr);

        // if a renderer node was created and passed in, add it to the topology
        if(pRendererNode != NULL)
        {
            // add the renderer node to the topology
            hr = m_pTopology->AddNode(pRendererNode);
            BREAK_ON_FAIL(hr);

            // connect the second Tee node output to the renderer sink node
            hr = pTeeNode->ConnectOutput(1, pRendererNode, 0);
            BREAK_ON_FAIL(hr);
        }

        // detach the Tee node and return it as the output node
        *ppTeeNode = pTeeNode.Detach();
    }
    while(false);

    return hr;
}

Get position (in time) of the last frame.


Hello,

Is there a way to find the time (in MFTIME format) of the last frame of a video? If I check the duration of the file, the value I get is always (as far as my tests go) wrong; it is always a little off. Is there a way to determine the actual time of the last frame? As a constraint: stepping through the file (the portion at the end) and finding the last frame is not an option. Or can I do this without rendering the video (with a valid window handle)?

Thanks.


